over shared processing, since each hemispheric system can specialize in one domain, thus
leading to an overall enhancement for the organism as a whole.
If we think about the data presented earlier in this chapter within this context, it seems
obvious that the two domains for which the auditory cortices in each hemisphere have become
specialized are, roughly, speech and tonal patterns. A bit of additional reflection suggests a
unifying hypothesis to explain these complementary functional specializations: speech and tonal
stimuli differ in their acoustic structure, and hence in their processing requirements. Whereas the
analysis of speech requires good temporal resolution to process rapidly changing energy peaks
(formants) that are characteristic of many speech consonants (see, for example, the work of
Tallal et al.^35 ), it can be argued that tonal processes instead require good frequency resolution.
In a truly linear system, temporal and spectral resolution are inversely related, so that
improving temporal resolution can only come at the expense of degrading spectral resolution and vice
versa. This tradeoff naturally arises from a fundamental physical constraint in acoustic
processing: better resolution in the frequency domain can be obtained only at the expense of
sampling within a longer time window, hence degrading temporal resolution; conversely,
high resolution in the temporal domain entails a degraded spectral representation. The auditory
nervous system is, of course, a highly nonlinear and distributed system; yet, it may also respect
this fundamental computational constraint, such that in the left auditory cortex the high
temporal resolution needed to process speech imposes an upper limit on the ability to resolve
spectral information, and vice versa for the right auditory cortex. To put it more simply, the
hypothesis is that there may be a tradeoff in processing in temporal and spectral domains, and
that auditory cortical systems in the two hemispheres have evolved a complementary
specialization, with the left having better temporal resolution, and the right better spectral resolution.
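The tradeoff can be stated numerically. In a windowed Fourier analysis of a digitized signal, an analysis window of N samples at a sampling rate fs spans N/fs seconds but yields frequency bins spaced fs/N Hz apart, so finer spectral detail can only be bought with a longer, temporally coarser window. The short Python sketch below is purely an illustration of that relation under these assumptions (the sampling rate and window lengths are arbitrary and are not drawn from the study); it simply prints the two quantities side by side.

```python
# Illustrative sketch only: the time-frequency tradeoff in a windowed
# Fourier analysis. An N-sample window at sampling rate fs spans
# N / fs seconds, while its frequency bins are spaced fs / N Hz apart.
# Finer spectral resolution therefore entails coarser temporal resolution.

fs = 16_000  # sampling rate in Hz (arbitrary choice for the example)

for n_samples in (64, 256, 1024, 4096):
    window_ms = 1000.0 * n_samples / fs   # temporal extent of the analysis window
    bin_hz = fs / n_samples               # spacing of the frequency bins
    print(f"window = {window_ms:6.1f} ms  ->  frequency resolution ~ {bin_hz:6.2f} Hz")
```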
According to the foregoing idea, therefore, the hemispheric differences we see in the
tonal perception literature reviewed above might reflect this more fundamental level of
functional organization. If this hypothesis is correct, then we ought to be able to obtain
evidence for differential response within left and right auditory cortices by manipulating
temporal and spectral parameters of an auditory stimulus, even if it is not perceived as
either speech or music. A recent functional imaging study from our lab^36 set out to
do just that using a parametric approach. Rather than looking for differences in cerebral
blood flow between a control and an activation condition, we examined the functional
changes in the brain that correlated with a given input parameter. This approach can be
particularly powerful since it helps to isolate brain activity that is specifically related to the
parameter of interest. We first created nonverbal stimuli that varied independently and
systematically along two dimensions, one temporal, the other spectral. The stimuli were
merely a series of pure tones that varied in frequency and duration; in one set of conditions
the frequency change was held constant and the temporal rate became faster across scans,
while in the other set of conditions the rate was held constant and the frequency differences
became finer across scans. We predicted that increasing the rate of temporal change would
preferentially recruit left auditory cortical areas, while increasing the number of spectral
elements would engage right auditory cortical regions.
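For concreteness, the sketch below shows one hypothetical way such tone sequences could be synthesized in Python. The function name, the parameter values, and the simple two-frequency alternation are illustrative assumptions of my own; the stimuli actually used in the study were more elaborate.

```python
# Hypothetical sketch only: pure-tone sequences in which either the
# presentation rate (temporal parameter) or the frequency step between
# successive tones (spectral parameter) is varied while the other is held
# fixed. All values are illustrative, not those of the study.
import numpy as np

def tone_sequence(rate_hz, freq_step_hz, base_freq_hz=500.0,
                  n_tones=20, fs=44_100):
    """Return a sequence of pure tones alternating between two frequencies."""
    tone_dur = 1.0 / rate_hz                        # each tone fills one period of the rate
    t = np.arange(int(round(tone_dur * fs))) / fs   # time axis for a single tone
    tones = []
    for i in range(n_tones):
        f = base_freq_hz + (i % 2) * freq_step_hz   # alternate between the two frequencies
        tones.append(np.sin(2.0 * np.pi * f * t))
    return np.concatenate(tones)

# Temporal conditions: frequency step fixed, presentation rate increases across scans.
temporal_conditions = [tone_sequence(rate_hz=r, freq_step_hz=100.0)
                       for r in (2.0, 4.0, 8.0, 16.0)]

# Spectral conditions: rate fixed, frequency differences become progressively finer.
spectral_conditions = [tone_sequence(rate_hz=2.0, freq_step_hz=d)
                       for d in (200.0, 100.0, 50.0, 25.0)]
```

In this toy version, the temporal set doubles the presentation rate from one condition to the next while the frequency step stays fixed, and the spectral set halves the frequency step while the rate stays fixed, mirroring the logic of varying one dimension systematically while holding the other constant.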
The results of greatest relevance for the present discussion were that cerebral blood flow
in a region of the left auditory cortex showed a greater response to increasing temporal
than spectral variation, whereas a symmetrical area on the right showed the reverse


  241
Free download pdf