Foundations of Cognitive Psychology: Neural Nets, Temporal Composites, and Tonality

After learning, the network was presented with temporal composites that were
similar to but not identical to one of the learned composites. For example, the
network was presented with a composite in which the pitch classes D, E, F, G,
A, and B were equally active (i.e., the vector 0,0,1,0,1,1,0,1,0,1,0,1). The network
recognized that this pattern was more similar to the C major composite than
to any other and significantly activated all and only the diatonic pitch classes
among the expectation units, including C, which was missing in the input.
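
To make this completion behavior concrete, here is a minimal numerical sketch.
It is not a reconstruction of the network in figure 19.7: the plain Hebbian
weight matrix and the use of the twelve binary major-key composites as the
learned patterns are simplifying assumptions, but the similarity-based fill-in
is the same in spirit.

```python
import numpy as np

# Pitch classes are indexed C=0, C#=1, ..., B=11.
C_MAJOR = np.array([1,0,1,0,1,1,0,1,0,1,0,1], dtype=float)    # C major diatonic set
patterns = np.stack([np.roll(C_MAJOR, k) for k in range(12)])  # all 12 major keys

# Hebbian memory: the sum of outer products of the learned composites.
W = patterns.T @ patterns

# Probe: the C major scale with its tonic withheld (D, E, F, G, A, B active).
probe = np.array([0,0,1,0,1,1,0,1,0,1,0,1], dtype=float)
expectation = W @ probe   # a similarity-weighted blend of the learned composites

names = ["C","C#","D","D#","E","F","F#","G","G#","A","A#","B"]
for pc in np.argsort(-expectation):
    print(f"{names[pc]:>2}: {expectation[pc]:.0f}")
```

The seven most active expectation units come out as exactly the C major
diatonic set, and the withheld C outranks every nondiatonic pitch class, a
crude analogue of the completion described above.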
With invariant pitch-class units, an autoassociator can learn modes. Bharucha
and Olney (1989) presented an autoassociator with binary modal composites
of 10 North Indian rāgs. After the network learned them, it was tested with
incomplete patterns. Rāg Bhairav, for example, contains the invariant pitch
classes 1,1,0,0,1,1,0,1,1,0,0,1 (which with C as the tonic would be C, D♭, E,
F, G, A♭, B).
When the network was presented with all the tones except the second scale
degree (D♭), all the scale degrees were activated among the expectation units,
including the missing second scale degree. The network generated these expect-
ations with a much smaller set of tones: the third, fourth, sixth, and seventh
scale degrees were sufficient to suggest Bhairav.
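
The same toy machinery illustrates completion over invariant pitch-class
units. The sketch below stores a single modal composite rather than the ten
rāg composites of the actual study, an assumption made purely for brevity:

```python
import numpy as np

# Bhairav's invariant pitch-class (scale-degree) composite, from the text.
bhairav = np.array([1,1,0,0,1,1,0,1,1,0,0,1], dtype=float)
W = np.outer(bhairav, bhairav)   # one-pattern Hebbian memory

probe = bhairav.copy()
probe[1] = 0.0                   # withhold the second scale degree (Db)
print(W @ probe)                 # every Bhairav scale degree is activated,
                                 # including the withheld second degree
```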


C. Learning Sequences
The expectations that derive from the above system are schematic—expectations
for classes of events rather than specific event tokens—based on familiarity
with a musical culture (Bharucha & Todd, 1989). They are also not sequential,
but rather represent global states or backgrounds against which the actual
sequences of events are heard. Yet tonal or modal composites can also serve as
the basis for encoding specific sequences. A memory for specific sequences,
when activated by appropriate context, generates veridical expectancies—the
cues that enable us to anticipate or recognize the next event in a familiar piece
and that underlie our ability to perform from memory.
The system shown in figure 19.8 is a sequential memory that serves this
function and has the added bonus that while it learns specific pieces it
also learns something about the sequential regularities—sequential schematic
expectancies—of the style. The architecture is similar to that of the autoasso-
ciator in figure 19.7 in that there is a set of input units and expectation units.
The input feature space is given more dimensions to include additional features
that play a role in cueing one’s memory for the continuation of a sequence.
Candidates for these additional features are contour, timbre, aspects of rhythm
and, because human memory is highly contextual, even aspects of the extra-
musical context that might cue memory; these additional context units could
conceivably receive input from systems far afield from the auditory system.
The system works by generating an expectation for the next event in a se-
quence, based on a temporal composite of the sequence thus far. As each new
event is heard, it adds to the composite, and the new composite generates an
expectation for the following event. The units in the middle, unlabeled layer of
figure 19.8 are called hidden units. They are necessary if the system is to be able
to learn the full range of possible transitions in musical sequences. Each hidden
unit computes a nonlinear, monotonically increasing function, as do neurons:
the more strongly activated a neuron, the stronger its response, but because
of physical limitations, the response strength asymptotes. One of the more
commonly used functions with these properties is the logistic function, which
rises smoothly from a lower asymptote to an upper one.
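
The sketch below shows the scheme in miniature. None of its particulars come
from the chapter: the decay rate, the layer sizes, the logistic activations,
and training by ordinary backpropagation are all assumptions, and the
extramusical context units discussed above are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

melody = [0, 2, 4, 5, 7, 5, 4, 2]    # a toy pitch-class sequence to memorize
decay = 0.6                          # how strongly past events persist

n_in, n_hid, n_out = 12, 8, 12
W1 = rng.normal(0.0, 0.5, (n_hid, n_in))
W2 = rng.normal(0.0, 0.5, (n_out, n_hid))

for epoch in range(3000):
    composite = np.zeros(n_in)
    for cur, nxt in zip(melody, melody[1:]):
        composite = decay * composite
        composite[cur] += 1.0                 # fold the new event into the composite
        h = sigmoid(W1 @ composite)           # hidden units: nonlinear, saturating
        y = sigmoid(W2 @ h)                   # expectation units for the next event
        target = np.zeros(n_out)
        target[nxt] = 1.0
        dy = (y - target) * y * (1.0 - y)     # backpropagate squared error
        dh = (W2.T @ dy) * h * (1.0 - h)
        W2 -= 0.5 * np.outer(dy, h)
        W1 -= 0.5 * np.outer(dh, composite)

# Replay the opening of the learned piece and read off the veridical expectancy.
composite = np.zeros(n_in)
for cur in melody[:4]:                        # hear C, D, E, F ...
    composite = decay * composite
    composite[cur] += 1.0
expect = sigmoid(W2 @ sigmoid(W1 @ composite))
print(np.argmax(expect))                      # should be 7, the G that follows
```

Because each event is folded into a decaying composite rather than replacing
it, the same pitch can cue different continuations at different points in the
piece, which is what lets a memory of this kind hold specific sequences rather
than mere pairwise transitions.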

