974 28 FEBRUARY 2020 • VOL 367 ISSUE 6481 sciencemag.org SCIENCE

INSIGHTS | PERSPECTIVES

NEUROSCIENCE

Splitting speech and music

Brain asymmetries for words and melodies of songs depend on opposite acoustic cues

By Daniela Sammler

Speech and music are human universals, and people around the world often blend them together into vocal songs (1). This entwinement of the speech and music cognitive domains is a challenge for the auditory cognitive system. How do listeners extract words and melodies from a single sound wave? The split is surmised to start in the signal: Speech and musical sounds are thought to differ in details of their acoustic structure and thus activate different receptive preferences of the left and right auditory cortices of the brain (2, 3). On page 1043 of this issue, Albouy et al. (4) provide evidence for the biophysical basis of the long-debated, yet still unresolved, hemispheric asymmetry of speech and music perception in humans. They show that the left and right auditory regions of the brain contribute differently to the decoding of words and melodies in songs.

Research on the nature of hemispheric