Sections II,A–C deal with mechanisms that enable a network to learn patterns by finding an appropriate set of interconnections. In each case, initial constraints on connectivity are specified, and the learning mechanism determines how these existing connections are strengthened or weakened. The initial constraints usually take the form of layers of units, with connections from one layer to the next; this architecture is supported by the layered organization of the cerebral cortex. The mechanisms for changing the connection strengths derive from Hebb’s (1949) hypothesis that when two connected neurons are active simultaneously or in close temporal succession, the connection between them is strengthened so that eventually the activation of one will lead indirectly to the activation of the other. The models discussed in Section II,A—self-organizing models—use this so-called Hebbian learning in close to its original form. The models discussed in Section II,C learn by error correction and use a modified version of Hebbian learning: the connection between two units changes as a function of the activation of one unit and the error registered by the other. The models discussed in Section II,B—autoassociators—can use either Hebbian learning or error correction, although the latter enables them to learn many more patterns and to distinguish grades of similarity.
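
The two update rules can be stated concretely. The following minimal sketch in Python with NumPy is my own illustration, not taken from the chapter; the layer sizes, learning rate, input pattern, and variable names are arbitrary assumptions. The Hebbian change in a weight is proportional to the product of the activations at its two ends, whereas the error-correcting (delta-rule) change is proportional to the presynaptic activation times the error registered by the receiving unit.

import numpy as np

rng = np.random.default_rng(0)

# One layer of input units fully connected to one layer of output units.
n_inputs, n_outputs = 4, 2
weights = rng.normal(scale=0.1, size=(n_outputs, n_inputs))
learning_rate = 0.1

x = np.array([1.0, 0.0, 1.0, 0.0])   # activations of the input units
target = np.array([1.0, 0.0])        # desired activations of the output units

# Hebbian learning: a connection is strengthened when the units at both of
# its ends are active together (outer product of post- and presynaptic activity).
y = weights @ x
weights += learning_rate * np.outer(y, x)

# Error correction (delta rule): the change depends on the presynaptic
# activation and on the error registered by the postsynaptic unit.
y = weights @ x
error = target - y
weights += learning_rate * np.outer(error, x)

The contrast is visible in the two update lines: pure Hebbian learning only records correlations between activations, while the error-correcting rule pushes the output activations toward an externally supplied target.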


A. Encoding Temporal Composites: Abstract Feature Detectors or Category Units
Sensory neurons are stimulated directly by energy external to the organism. Their tuning characteristics are a consequence of their inherent transducing properties and are innately fixed. For example, the inner hair cells convert mechanical deformation of the basilar membrane into neural signals. In contrast, neurons beyond the sensory periphery are stimulated by other neurons that connect to them, not by the environment directly. Their tuning characteristics are based on the pattern of stimulation they receive from other neurons. These can be called abstract feature detectors or category units (because they encode entire categories).
The connectivity that achieves this can be learned by a class of learning models called self-organizing neural nets (Grossberg, 1970, 1972, 1976; Rumelhart & Zipser, 1985; von der Malsburg, 1973). Grossberg’s models, the earliest and most fully developed of this kind, have been used to model the acquisition of auditory categories in music (Gjerdingen, 1989b, 1990) and speech (Mitra, 1993). Although a detailed description of this model would require a chapter in itself, it is possible to capture the essence of self-organizing models rather simply. (Most of the specifics of Grossberg’s theory deal with ensuring the stability of the learned categories, and the stability of human categories is an open question.)
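
To make that essence concrete, here is a minimal competitive-learning sketch in Python with NumPy. It is my own illustration, not Grossberg’s model; the network sizes, normalization scheme, learning rate, and example patterns are assumptions. Category units compete, the most active unit wins, and only the winner’s incoming weights move toward the current input pattern, so repeated presentations tune different units to recurring feature combinations; this is the winner-take-all arrangement described in the next paragraph.

import numpy as np

rng = np.random.default_rng(1)

n_input, n_category = 12, 3                     # sizes chosen arbitrarily
weights = rng.random((n_category, n_input))
weights /= weights.sum(axis=1, keepdims=True)   # normalize each category unit's weights

def present(pattern, lr=0.2):
    # One learning step on a binary input pattern (1 = feature present).
    activation = weights @ pattern          # net input to each category unit
    winner = int(np.argmax(activation))     # winner-take-all: most active unit wins
    # Only the winner's incoming weights move toward the active features,
    # so that unit becomes increasingly tuned to this combination of features.
    weights[winner] += lr * (pattern / pattern.sum() - weights[winner])
    return winner

pattern_a = np.zeros(n_input); pattern_a[[0, 3, 7]] = 1.0
pattern_b = np.zeros(n_input); pattern_b[[2, 5, 9]] = 1.0
for _ in range(20):
    present(pattern_a)
    present(pattern_b)

Restricting the weight change to the winning unit is what allows different category units to come to stand for different input categories; updating every unit on every pattern would make them all drift toward the same average input.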
The top panel of figure 19.5 shows a layer of units (input units), with preexisting tuning characteristics, connected to another layer (category units). The category units are in a winner-take-all configuration, which is common in the brain: the most active unit in such a configuration has the effect of decreasing the activation of the other units and boosting its own activation. A pattern presented to the network will activate the input units with the corresponding features (filled circles). The ensuing activation of the category units depends on the weights on the links. One of the category units will win (filled circle in the

