
middle panel), and the weights on the links feeding into the winning unit change by Hebbian learning. The links from strongly active input units are strengthened (bottom panel). Self-organizing mechanisms have the further requirement that the links to the winner from weakly active input units are weakened. The winner is on its way to becoming a feature detector or category unit for the entire input pattern. Similar patterns will activate this unit more strongly, and dissimilar patterns will activate this unit more weakly, than before learning.
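This update can be sketched in a few lines of code. The sketch below is an illustration, not code from the chapter: it assumes a standard competitive-learning rule, Δw = η(a − w), applied only to the winner. Under this rule, links from strongly active inputs are pulled upward and links from weakly active inputs are pulled downward, as described above; the function name update_winner and the learning rate are hypothetical.

```python
import numpy as np

def update_winner(weights, pattern, winner, lr=0.1):
    """Hebbian-style update for the winning category unit only.

    weights : (n_categories, n_inputs) matrix of weight vectors
    pattern : (n_inputs,) vector of input-unit activations
    winner  : index of the most strongly responding category unit
    lr      : learning rate (step size toward the pattern)
    """
    # Move the winner's weights toward the input activations:
    # weights from strongly active inputs grow, and weights from
    # weakly active inputs shrink -- the extra requirement that
    # distinguishes self-organizing mechanisms.
    weights[winner] += lr * (pattern - weights[winner])
    return weights
```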
Self-organization can be visualized in terms of vector spaces. Consider a network with two input units, f1 and f2, and indefinitely many units in a second layer. The units in the second layer are available to become abstract feature detectors and may be called category units. Each category unit has two links feeding into it, one from each input unit. The weights on these links can be plotted as vectors (solid lines in figure 19.6) in two-dimensional feature space; these are weight vectors: each category unit has a weight vector. A pattern presented to the network can be plotted as a vector (dashed line) in the same space. The weight vector that is closest in angle to the pattern vector represents the category unit that has responded most strongly to the pattern and is therefore the most likely candidate for an abstract feature detector for that pattern. The weights of this unit are changed so as to move the weight vector closer to the pattern vector. The closer the weight vector moves to the pattern vector, the more strongly this unit will respond to that pattern; that is, the more it develops the tuning characteristics for that pattern.
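This geometry can be made concrete with a short sketch, again an illustration under the same assumed update rule rather than code from the chapter: the winner is the category unit whose weight vector has the largest cosine with the pattern vector, and learning rotates that weight vector toward the pattern.

```python
import numpy as np

def winner_by_angle(weights, pattern):
    """Index of the weight vector closest in angle to the pattern vector."""
    cosines = (weights @ pattern) / (
        np.linalg.norm(weights, axis=1) * np.linalg.norm(pattern))
    return int(np.argmax(cosines))

# Two input units (f1, f2) and three candidate category units.
weights = np.array([[1.0, 0.1],
                    [0.5, 0.5],
                    [0.1, 1.0]])
pattern = np.array([0.9, 0.8])               # pattern vector (dashed line)
w = winner_by_angle(weights, pattern)        # closest weight vector wins
weights[w] += 0.2 * (pattern - weights[w])   # move winner toward pattern
```

With repeated presentations, the winner's weight vector converges on the direction of the pattern vector, so that unit comes to respond most strongly to that pattern and to patterns similar to it.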


Figure 19.5
Self-organization. The winning category unit gets to learn. Links to it from highly active input units
are strengthened.

