fall inside the dominance region of pitch perception) are sufficient to sharpen the neural
representations underlying pitch discrimination. Second, prolonging sound duration does
not further facilitate pitch discrimination with spectrally rich sounds, at least once the
duration exceeds 100 ms.
Most recently, the neural and behavioural accuracy of frequency discrimination across
different frequency ranges (250–4000 Hz) with pure sinusoidal vs spectrally rich sounds
was compared.^43 The magnitude of the frequency change (2.5, 5, 10, and 20 per cent) was also
varied in both the MMN and behavioural paradigms. The data showed that, in general, the spec-
trally rich sounds elicited larger MMN across the whole frequency range. Additionally, the
changes at the middle frequencies elicited larger MMNs, with shorter latencies than changes
at the lowest and highest frequencies. Replicating earlier findings, the MMN
amplitude and latency reflected the magnitude of the frequency change.^5 These neural
indices were mirrored in behavioural performance as well: hit rates were highest and reaction
times fastest in the middle frequency range, with the widest deviants, and with harmonically
rich sounds. To summarize, the facilitation of pitch discrimination produced by a spectrally
rich sound structure extends across a relatively broad frequency range.
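To make the stimulus dimensions concrete, the short Python sketch below shows how the same relative changes of 2.5–20 per cent translate into very different absolute changes in Hz across the 250–4000 Hz range; the standard frequencies used here (250, 1000, and 4000 Hz) are chosen for illustration only and are not quoted from the original stimulus set.

```python
# Illustrative sketch: per cent frequency deviants across the 250-4000 Hz range.
# The standards below are example values spanning the reported range, not the
# study's actual standard tones.
standards_hz = [250, 1000, 4000]
deviance_pct = [2.5, 5, 10, 20]   # magnitudes of frequency change

for f0 in standards_hz:
    for pct in deviance_pct:
        deviant = f0 * (1 + pct / 100)          # upward deviant frequency
        print(f"standard {f0:>5} Hz, {pct:>4}% change -> "
              f"deviant {deviant:7.1f} Hz (delta {deviant - f0:6.1f} Hz)")
```

The relative (per cent) definition keeps the change constant on a logarithmic pitch scale, even though the corresponding change in Hz grows steeply toward the higher standards.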
Sequential stimulation
Behavioural evidence shows that, in addition to spectral complexity, the temporal proximity
of adjacent sounds may also facilitate pitch encoding in several of its forms. For instance,
Dewar et al.^44 presented subjects with standard tonal or atonal sequences followed by two
comparison tones in the no-context condition or by two comparison sequences in the full-
context condition. The listeners recognized the target tone more accurately under the full-
context condition than under the no-context condition. In addition, they performed more
accurately when the target tone was embedded in tonal rather than atonal sequences, and
musicians outperformed nonmusicians in all recognition tasks. These and related findings
inspired us to investigate whether the presence of a familiar, Western sound context facilitates
pitch processing even at the preattentive level and, further, whether this facilitation might be enhanced by
musical expertise.^45
To this end, 10 musicians and 10 nonmusicians were presented with a 144-Hz frequency
change in three contexts. First, the frequency change occurred between single sounds (554 vs
698 Hz). Second, it was embedded in sound sequences in which the successive sounds
belonged to the Western musical scale (440–493–554–587–659 vs 440–493–698–587–659
Hz). Third, the change was presented within sound sequences composed of arithmetically
determined intervals that compromise between Western semitone and whole-tone steps
(446–467–499–547–622 vs 446–467–643–547–622 Hz). During the recordings, the
subjects were (once again) reading a book of their own choice.
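As a rough check of why the first sequence sounds like a familiar Western scale while the second does not, the sketch below converts each step of the two standard sequences into equal-tempered semitones. The sequence frequencies and the 144-Hz deviant steps come from the description above; the assumption of 12-tone equal temperament with A4 = 440 Hz, and the reading of the first sequence as A4–B4–C#5–D5–E5, are added here for illustration and are not stated in the original study.

```python
import math

def semitones(f1, f2):
    """Interval between two frequencies in equal-tempered semitones."""
    return 12 * math.log2(f2 / f1)

# Standard sequences from the text (Hz)
western = [440, 493, 554, 587, 659]      # read here as roughly A4-B4-C#5-D5-E5
compromise = [446, 467, 499, 547, 622]   # arithmetically determined intervals

for name, seq in [("Western", western), ("compromise", compromise)]:
    steps = [round(semitones(a, b), 2) for a, b in zip(seq, seq[1:])]
    print(name, "steps (semitones):", steps)

# In both sequence contexts the deviant replaces the third tone with one
# 144 Hz higher (554 -> 698 Hz and 499 -> 643 Hz), matching the single-tone
# condition (554 vs 698 Hz).
print("deviant steps:", round(semitones(554, 698), 2),
      "and", round(semitones(499, 643), 2), "semitones")
```

The Western sequence comes out in whole- and half-tone steps that fit the equal-tempered scale, whereas the compromise sequence yields steps that do not align with it, which is what makes the latter context unfamiliar to Western listeners.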
The MMN data indicate, first of all, that the MMN amplitude was larger when the frequency
change occurred within the temporally complex sound sequences based on the familiar scale
than when it occurred within the unfamiliar-scale sequences and, further, larger with the
unfamiliar scale than among single tones (see Figure 19.2). This suggests that musical context
facilitates pitch discrimination most effectively when it is familiar to the subjects. Second,
with both familiar and unfamiliar complex sound sequences, the musicians showed a generalized
facilitation of the MMN in the form of a shorter MMN latency compared with the nonmusicians.