
Figure 1.1 A simple three-layered network

layers of processing units (‘neurons’). In principle, one can have as many
layers in a network as desired, inserting hidden units between the input and
output layers. But in order to get a multi-layered network to converge on
anything like the reliable discharge of a cognitive function, appropriate
training procedures or learning rules had to be available. It was the
discovery of algorithms for modifying the set of weights and biases within
multi-layered networks under training which has made recent progress
possible. An illustration in terms of a simple, three-layered, feed-forward
network (see figure 1.1) may help to make this comprehensible.
The network is feed-forward in that processing goes from input units
through hidden units to output units, with no internal looping (a
complication which could be added). The connections between units are
assigned various weights, and there will usually also be some numerical
bias assigned to each hidden unit and output unit. It may not matter very
much what initial values are assigned to weights and biases before training
up the network on some set of inputs and desired outputs. The important
thing about the training process is that there should be a systematic way of
modifying the set of weights and biases within the network, in response
to discrepancies between the actual output and the desired output (see
Bechtel and Abrahamsen, 1991, ch.3, for more information about the
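
By way of a concrete illustration (not part of the original text), here is a minimal sketch in Python of the kind of network just described: a three-layered, feed-forward network with weighted connections and unit biases, trained by propagating output error back through the network (the 'backpropagation' family of learning rules the passage alludes to). The task, unit counts, learning rate and initial values are all illustrative assumptions rather than anything specified in the text.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# A toy input-output mapping (XOR), which cannot be learned
# without a layer of hidden units.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Initial weights and biases: as the passage notes, the exact
# starting values matter little, so small random numbers suffice.
W1 = rng.normal(scale=1.0, size=(2, 3))   # input -> hidden weights
b1 = np.zeros(3)                          # hidden-unit biases
W2 = rng.normal(scale=1.0, size=(3, 1))   # hidden -> output weights
b2 = np.zeros(1)                          # output-unit bias

lr = 1.0  # learning rate (illustrative value)
for epoch in range(5000):
    # Feed-forward pass: input units -> hidden units -> output
    # units, with no internal looping.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Discrepancy between actual and desired output.
    err = out - y

    # Systematic modification of weights and biases in response
    # to that discrepancy (gradient descent on squared error).
    d_out = err * out * (1 - out)
    d_hid = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_hid)
    b1 -= lr * d_hid.sum(axis=0)

print(np.round(out, 2))  # typically converges toward [0, 1, 1, 0]

Notice that nothing in the training loop refers to the task itself: the weight and bias updates are driven entirely by the discrepancy between actual and desired output, just as the passage describes.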

