by the connections; these constraints can be learned by repeated exposure. This learning process is either supervised by an external teaching exemplar or unsupervised via passive exposure. In the present review, we focus on unsupervised learning algorithms that extract statistical regularities and encode events that often occur together. The competitive learning algorithm^16 is one such unsupervised algorithm; it provides the basis for learning in self-organizing maps (SOMs)^17,18 and ART (adaptive resonance theory) networks.^19,20 Unsupervised learning algorithms seem well suited to modelling music cognition, as acculturation to musical structures presumably occurs without supervision in listeners.
In an artificial neural network, a set of units is linked via synaptic connections of different strengths. The units are generally arranged into layers, with an input layer coding the incoming information. When a stimulus is presented to the model, the input units are activated. This activation is sent via the connections to units in other layers, and the strength of the transmitted activation is controlled by the strengths of the connections (i.e. the weights). In parallel with biological networks, the learning process is defined as a modification of the connection weights. During learning, a set of training stimuli is presented repeatedly to the network, and the network adapts its connections so that it becomes sensitive to the underlying correlational structure between events of the training set. Before learning starts, all connections are generally set to random values, a state that represents the 'ignorance' of the model. Over the course of learning, the units gradually become sensitive to different input events or categories.
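To make this architecture concrete, here is a minimal sketch (ours, not the chapter's) of a two-layer network with random initial weights; the layer sizes, variable names and uniform initialization are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random initial weights: the 'ignorance' of the model before learning.
# Rows index units of the next layer, columns index input units; the
# layer sizes are arbitrary illustrative choices.
n_input, n_next = 12, 4
W = rng.uniform(size=(n_next, n_input))

def forward(x, W):
    """Activation transmitted to each unit of the next layer: the input
    activation weighted by the connection strengths."""
    return W @ x

x = rng.uniform(size=n_input)   # an input pattern coding a stimulus
print(forward(x, W))            # activation received by the next layer
```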
With the competitive learning algorithm, this specialization takes place through competition among the units.^16 When an input is presented to the network, the unit that is best able to represent it wins the competition and is allowed to learn the representation of this input even better. That unit's response will subsequently be stronger for this same input pattern and weaker for others. In a similar way, other units learn to respond selectively to other input patterns.
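The winner-take-all update can be sketched as follows, again as an illustration under our own assumptions (distance-based matching and a fixed learning rate) rather than a definitive implementation:

```python
import numpy as np

def competitive_step(x, W, lr=0.1):
    """One competitive-learning step: the unit whose weight vector best
    matches the input wins and moves its weights toward the input, so
    its response grows stronger for this pattern and weaker for others."""
    winner = int(np.argmin(np.linalg.norm(W - x, axis=1)))  # best-matching unit
    W[winner] += lr * (x - W[winner])                       # only the winner learns
    return winner

rng = np.random.default_rng(0)
W = rng.uniform(size=(4, 12))        # random initial weights ('ignorance')
x = rng.uniform(size=12)
for _ in range(50):                  # repeated exposure to the same pattern
    competitive_step(x, W)
```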
The competitive learning algorithm is generalized in SOMs, which add a spatial layout of the units: the winning unit and its neighbouring units learn together. This learning process leads to topographic mappings between the input data and the units on the map: for two similar input patterns, the map units that respond maximally are located near each other.
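Generalizing the previous sketch, a minimal SOM step adds a map layout and a neighbourhood function; the Gaussian neighbourhood and the 5 × 5 map below are our illustrative assumptions, not Kohonen's reference implementation:

```python
import numpy as np

def som_step(x, W, positions, lr=0.1, sigma=1.0):
    """One SOM step: the best-matching unit *and its map neighbours*
    move toward the input. Learning strength falls off with distance
    on the map, which yields a topographic mapping: similar inputs
    end up represented by nearby map units."""
    winner = int(np.argmin(np.linalg.norm(W - x, axis=1)))       # best-matching unit
    d = np.linalg.norm(positions - positions[winner], axis=1)    # distances on the map
    h = np.exp(-d**2 / (2 * sigma**2))                           # Gaussian neighbourhood
    W += lr * h[:, None] * (x - W)                               # winner and neighbours learn
    return winner

rng = np.random.default_rng(0)
side, n_input = 5, 12
positions = np.array([(i, j) for i in range(side) for j in range(side)], float)
W = rng.uniform(size=(side * side, n_input))                     # random initial map
for x in rng.uniform(size=(200, n_input)):                       # passive exposure
    som_step(x, W, positions)
```

In practice the learning rate and neighbourhood width are usually decreased over training, so the map first orders itself globally and then fine-tunes locally.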
This mirrors principles of cortical information processing, such as the spatial ordering found in sensory processing areas (somatosensory, visual, auditory). In the primary visual cortex, the stimulus orientation to which cells respond best changes in an orderly fashion, with nearby cells responding best to similar orientations.^21 In the auditory system, tonotopic organization is found at almost all major stages of processing (inner ear, auditory nerve, cochlear nucleus, auditory cortex), and in the auditory cortex nearby cells respond best to similar frequencies.^22–25
According to Kohonen,^26 the SOM algorithm parallels the adaptive processes underlying plasticity in biological networks: the theoretical definition of the neighbourhood results in learning whose consequences resemble those of the lateral connections modelled by chemical processes. The principles of self-organization are general across domains, but the input representations are domain specific. SOMs have been applied to a variety of domains, including visual perception,^27 semantic processing^28 and auditory perception.^29,30 Mercado et al.^30 used an SOM to simulate experience-dependent reorganization of representation


   111
Free download pdf