input. Thus, the same decibel change in sound level produces
a smaller change in neural output at high sound levels than at
low sound levels. This compressive nonlinearity may also be
the cause of a difference between simultaneous and forward
tonal masking. In simultaneous tonal masking, the signal must
change by about 1 dB for each 1 dB change in masker level in
order for constant signal detection to occur. In forward mask-
ing, a change of less than 1 dB for each decibel change in
masker level is required for constant detection. This change
in masking slopes between simultaneous and forward mask-
ing may result because in simultaneous masking, both the
signal and masker undergo the same form of compression. In
forward masking, the temporal separation between the masker
and signal results in the lower-level signal undergoing a dif-
ferent form of compression than that for the higher-level
masker (Moore, 1995).
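
A minimal numerical sketch can make the slope argument concrete. It assumes that compression can be written in decibel terms as output level ≈ exponent × input level, that constant detection requires a constant difference between the compressed signal and the compressed masker, and that the forward-masked signal is processed roughly linearly while the masker is strongly compressed; the exponent values below are illustrative assumptions, not values from the text.

```python
# Sketch of how shared vs. unequal compression changes masking slopes.
# The power-law exponents (0.2, 1.0) are illustrative assumptions.

def signal_change_per_masker_dB(signal_exponent, masker_exponent,
                                masker_step_dB=1.0):
    """Signal increase (dB) needed to keep the compressed signal-minus-masker
    difference constant when the masker rises by masker_step_dB.
    Compression is modeled in dB terms as output_dB = exponent * input_dB."""
    # Constant detection criterion (assumed):
    #   signal_exponent * d(signal_dB) = masker_exponent * d(masker_dB)
    return (masker_exponent / signal_exponent) * masker_step_dB

# Simultaneous masking: signal and masker undergo the same compression.
print(signal_change_per_masker_dB(0.2, 0.2))  # -> 1.0 dB per 1 dB of masker

# Forward masking: the lower-level signal is treated nearly linearly while
# the higher-level masker is compressed, giving a slope of less than 1.
print(signal_change_per_masker_dB(1.0, 0.2))  # -> 0.2 dB per 1 dB of masker
```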


Temporal Modulation Transfer Functions


Most sounds change in their overall level over time (these
sounds are amplitude modulated). The temporal modulation
transfer function is one measure of the auditory system’s abil-
ity to detect such level changes. A noise waveform is ampli-
tude modulated such that its overall amplitude varies from a
low to a high level in a sinusoidal manner. Listeners are asked
to detect whether such dynamic amplitude modulation oc-
curs. The depth of modulation (the difference between the
peak and valley levels) required for modulation detection
(i.e., the ability to detect a difference between a noise with no
modulation and a noise sinusoidally amplitude modulated) is
determined as a function of the rate at which the amplitudes
are modulated. As the modulation rate increases, the depth of
modulation must increase to maintain a threshold ability to
detect modulation. That is, at low rates of modulation, only a
small depth of modulation is required to detect amplitude
modulation. As the rate of modulation increases, the depth of
modulation required for modulation detection also increases
in a monotonic manner. The function relating threshold depth
of modulation to the rate of modulation resembles that of a
lowpass filter. The lowpass form of this function describes
the temporal modulation transfer function for processing
temporal amplitude changes (Dau, Kollmeier, & Kohlrausch,
1997; Viemeister & Plack, 1993).


DISCRIMINATION


Measures of the ability of listeners to discern differences in
frequency, level, and the timing properties of sounds are often
tied to the nineteenth-century observations of Weber and
Fechner. Weber's law states that the just-noticeable
difference between two stimuli is a fixed proportion (the
Weber fraction) of the value of the stimuli being judged. The
Weber fractions for frequency, level, and duration have been
measured for a variety
of acoustic signals.
For sound level, listeners can detect between a 0.5- and
1.5-dB level difference (Jesteadt, Wier, & Green, 1977). For
tonal stimuli, the Weber fraction is somewhat dependent on
overall level, leading to a near miss to Weber's law. The
Weber fraction for noise stimuli is constant at about 0.5 dB
as a function of overall level, such that there is no near miss
to Weber's law for noise signals. The just-
noticeable difference for tonal frequency is about 0.2–0.4%
of the base frequency; for example, trained listeners can just
discriminate a 1002-Hz tone from a 1000-Hz tone (Wier,
Jesteadt, & Green, 1977). There is not a constant Weber
fraction for most measures of temporal discrimination.
Changes in duration can affect the detectability and loudness
of sound, making it difficult to obtain unconfounded mea-
sures of duration discrimination (Abel, 1971; Viemeister &
Plack, 1993).
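
As a rough numerical companion to these values (a sketch only; the decibel-to-intensity conversion is standard, and the base frequency and Weber fractions simply mirror the examples quoted above):

```python
def frequency_jnd_hz(base_hz, weber_fraction=0.002):
    """Just-noticeable frequency difference for a given Weber fraction
    (0.002 corresponds to the 0.2% lower end of the quoted range)."""
    return weber_fraction * base_hz

def intensity_ratio_for_level_jnd(jnd_db=0.5):
    """Sound-intensity ratio corresponding to a level JND given in dB."""
    return 10 ** (jnd_db / 10)

print(frequency_jnd_hz(1000.0))            # -> 2.0 Hz: 1000 Hz vs. 1002 Hz
print(intensity_ratio_for_level_jnd(0.5))  # -> ~1.12, about a 12% change
```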

SOUND LOCALIZATION

Sound Localization in Three-Dimensional Space

Sound has the properties of level, frequency, and time, but
not space. Yet, the sound produced by an object can be used
by most animals to locate that object in three-dimensional
space (Blauert, 1997; Gilkey & Anderson, 1997). A different
set of acoustic cues is used for sound localization in each
plane. The location of a sound source is determined by neural
computations based on these cues.
In the horizontal or azimuth plane, left-right judgments
of sound location are most likely based on interaural differ-
ences of time and level (Wightman & Kistler, 1993; Yost &
Gourevitch, 1987). The sound from a source will reach one ear
(near ear) before it reaches the other ear (far ear), and the in-
teraural difference in arrival time (or a subsequent interaural
phase difference) is a cue for sound localization. However,
given the small maximal interaural time difference due to
the size of the head, this cue is probably only useful for low-
frequency sounds. The sound level at the near ear will be
greater than that at the far ear, primarily because the head pro-
duces a sound shadow at the far ear. The sound shadow becomes
more pronounced as frequency increases, so that interaural level differences most
likely provide cues for sound localization at high frequencies.
The fact that interaural time provides a cue for sound location
at low frequencies and interaural level differences a cue at
high frequencies is referred to as the duplex theory of sound
localization (Yost & Gourvitvch, 1987).