Handbook for Sound Engineers


Chapter 25


ment, to encompass this total range without gain adjustment, begs to be answered. Practically, though, the dynamic range of nearly any meaningful source (that could end up in a mix) or a finished product (that people might want to spend money to listen to) is actually considerably less than that of available digital hardware implemented and used properly. However, one must not get too cavalier with this approach: clean masters (either multitrack or mixed) should retain a dynamic range well in excess of the intended distribution medium to allow for losses in reduction and processing of the masters for same. Just a little compression can put a big dent in dynamic range.
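Just how big a dent can be sketched numerically with an idealized static compressor curve; the threshold and ratio below are illustrative assumptions, not values from the text:

```python
# Sketch: how even gentle compression shrinks dynamic range.
# Assumes an idealized static curve: levels above the threshold
# are scaled by 1/ratio; threshold and ratio are hypothetical.

def compressed_level(level_db, threshold_db=-20.0, ratio=2.0):
    """Output level (dBFS) of an idealized compressor for a given input level."""
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

# A source spanning -60 dBFS to 0 dBFS has 60 dB of dynamic range.
quiet, loud = -60.0, 0.0
out_quiet = compressed_level(quiet)  # below threshold: unchanged at -60 dB
out_loud = compressed_level(loud)    # above threshold: pulled down to -10 dB
print(out_loud - out_quiet)          # 50.0 -- a 2:1 ratio already cost 10 dB
```

Even this mild 2:1 setting removes 10 dB from the range above the threshold, which is why masters headed for further processing should be left uncompressed.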
Second is a matter of sampling rate. Possibly because it is of less obvious sonic consequence than resolution, its effects have taken longer to appreciate and resolve. Although Nyquist sampling can, no question, allow the reconstruction of near half-Nyquist frequencies, it belies the fact that two samples per cycle is insufficient to determine the frequency and the exact level of a signal if either is changing; a considerable number of samples are needed over several cycles, and even then the dynamic reconstruction is pretty smeared and sloppy as a consequence of the time-domain response of the reconstruction filter. Nyquist sampling works quite well for audio since there isn't that much sonic information up around 24 kHz, and by and large what there is doesn't move too quickly. But the smearing effect of close-in band-limiting antialiasing filters really made a mess of highly transient material in those big bad days before oversampling.
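The two-samples-per-cycle ambiguity can be demonstrated directly: sampling a sinusoid at exactly half the sample rate, the level captured depends entirely on where the samples happen to land in phase. A minimal sketch:

```python
import math

# Sketch: sampling a sinusoid at exactly two samples per cycle (f = fs/2).
# Sample instants fall at t = n/fs, so the samples are sin(pi*n + phase)
# = (-1)**n * sin(phase): the apparent amplitude is set by the phase alone,
# and the true level cannot be recovered from the samples.

def sampled_peak(phase, n_samples=100):
    """Largest absolute sample value seen for a unit sinusoid at f = fs/2."""
    return max(abs(math.sin(math.pi * n + phase)) for n in range(n_samples))

print(sampled_peak(math.pi / 2))  # ~1.0: samples land on the crests
print(sampled_peak(0.0))          # ~0.0: samples land on the zero crossings
```

The same unit-amplitude tone reads as full scale or as silence depending on phase, which is why a run of samples over several cycles is needed to pin down a changing signal.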


25.18.1.4 Antialiasing Filters


Early converters used brutal brick-wall filters at the Nyquist frequency to prevent ultrasonic frequencies from being mirrored down (aliased) into the audio and to prevent further ultrasonic signals from heterodyning with the sampling frequency. For example, a 40 kHz signal passing into a 48 kHz rate sampler will produce an 8 kHz by-product that will definitely become audible upon signal reconstruction.
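The 8 kHz figure follows from the standard folding arithmetic: the alias appears at the distance from the input frequency to the nearest multiple of the sample rate. A minimal sketch:

```python
# Sketch: where an unfiltered ultrasonic input folds (aliases) after sampling.
# The alias is the distance from f_in to the nearest integer multiple of fs.

def alias_frequency(f_in, fs):
    """Apparent frequency (Hz) of a tone at f_in sampled at rate fs."""
    f = f_in % fs
    return min(f, fs - f)

print(alias_frequency(40_000, 48_000))  # 8000: the example from the text
print(alias_frequency(50_000, 48_000))  # 2000: folds down near dc
```

Note that the alias lands squarely in the audio band with nothing left to distinguish it from legitimate program material, which is why the filtering must happen before the sampler.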
There is no way these filters could ever be described as anything other than a bad thing. Their temporal response was appalling, their effect reaching far, far down into the desired audio passband. More than anything else, it was these filters that gave digital audio a bad name in its early days.
Sigma-delta converters came to the rescue. Oversampling, the technique of taking and reconstructing samples at a multiple of the sampling rate (4, 8, 16, or so), dramatically improved this situation by allowing the nasty filters to be both relaxed in brutality and moved correspondingly higher in frequency; the filters then had far less in-band effect. Sigma-delta converters typically initially sample 64, 128, or even 256 times above the nominal sample rate, with the consequence that the analog antialiasing can be reduced to as little as a gentle single- or double-pole filter; the band limitation is done inside the converter by a phase-linear FIR filter, with considerably reduced sonic impact.
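Why such a gentle analog filter becomes adequate can be sketched with the one-pole low-pass attenuation formula; the corner frequency and rates below are illustrative assumptions:

```python
import math

# Sketch: oversampling moves the folding frequency far above the audio band,
# so a gentle analog filter gives useful attenuation there.
# A single-pole low-pass has |H(f)| = 1 / sqrt(1 + (f/fc)**2).

def single_pole_db(f, fc):
    """Attenuation in dB of a one-pole low-pass at frequency f, corner fc."""
    return -10.0 * math.log10(1.0 + (f / fc) ** 2)

fs = 48_000  # nominal sample rate (Hz)
fc = 30_000  # gentle corner, safely above the audio band (assumed value)
for ratio in (1, 64, 256):
    nyquist = ratio * fs / 2
    print(ratio, round(single_pole_db(nyquist, fc), 1))
# 1x:   about -2 dB at 24 kHz -- useless as an antialiasing filter
# 64x:  about -34 dB at 1.536 MHz
# 256x: about -46 dB at 6.144 MHz
```

At the original Nyquist frequency a single pole offers essentially no protection, but at a 64 or 256 times oversampled rate the same pole attenuates the fold-down region substantially, with the sigma-delta modulator's noise shaping and internal FIR filter doing the rest.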
Nevertheless, there have been experiments indicating that, with some program material and under some conditions, even such benign internal filters at 20 kHz are audible in comparison to the same class of filter set twice as high in frequency. Since the only way to properly engineer such a filter at 40 kHz is to double the sample rate, it seems that the predominant improvement (and ever so slight at that) of a 96+ kHz system is not the increased bandwidth available (arguments will continue to rage about our ability to hear or sense stuff up there, and even the desirability of its existence) but that doubling the rate is the only means of pushing the last vestiges of filter effects out of audibility. Since this means doubling the amount of processing hardware in a system, it is not a light decision.

25.18.1.5 Types of A/D Converters

There are three types of converters with possible application to digital audio. Although the sigma-delta type without question rules the roost in pro audio, enough applications use flash and successive-approximation conversion for them to be considered here.

Flash Conversion. Flash conversion involves a long train of comparators, such that a given signal amplitude will trip a given number of comparators, and fairly simple conversion logic can turn their outputs into a binary word. It is the fastest conversion method as far as logic propagation times go; a change in input level is instantly reflected in the output code. The downside is the sheer number of comparators needed for a sensible word width, one for each possible level of resolution; also, the offset inaccuracies of the comparators tend to dwarf the required resolution. That said, they are little used, except in some hybrid converters where a 4-bit flash converter provides the major resolution of a wider word, leaving the remainder to a more accurate type.
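The comparator-count problem is easy to quantify: an n-bit flash converter needs one comparator per quantization threshold, or 2^n − 1 in total. A minimal sketch:

```python
# Sketch: comparator count grows exponentially with flash-converter word width.
# An n-bit flash needs one comparator per quantization threshold: 2**n - 1.

def flash_comparators(bits):
    """Number of comparators required by an n-bit flash converter."""
    return 2 ** bits - 1

for bits in (4, 8, 16, 24):
    print(bits, flash_comparators(bits))
# 4 bits:  15 comparators -- practical, as in the hybrid converters above
# 8 bits:  255
# 16 bits: 65,535 -- already unwieldy
# 24 bits: 16,777,215 -- hopeless for audio word widths
```

The exponential growth, together with comparator offset errors, is why flash conversion survives in audio mainly as the coarse first stage of hybrid converters.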

Successive-Approximation Encoder. The successive-approximation encoder is a very common form of encoder, especially where high speed at high accuracy and with low latency (processing delay time) are