learning rules used to train up networks). After a number of trials on a
given set of inputs and a series of modifications of the connection weights,
the network may settle down and converge on a set of weights and biases
which gives outputs reliably close to the desired values. (But it should be
noted that this may require a very long run of trials, and it is entirely
possible that the network will never converge on a successful set of weights
and biases at all.) The creation of various connectionist learning-algorithms may thus be seen as attempts to model learning as a natural process,
providing, in fact, a sort of low-level implementation of the behaviourist
law of effect.
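The trial-by-trial adjustment just described can be illustrated with a toy sketch (the code and its names are mine, not the author's): a single sigmoid unit has its weights and bias nudged after each trial in proportion to the output error, and over many runs it converges on values that give reliably correct outputs for logical OR.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train(examples, epochs=5000, lr=0.5, seed=0):
    """Repeated trials: after each input, nudge weights and bias
    in the direction that reduces the output error (delta rule)."""
    rng = random.Random(seed)
    w = [rng.uniform(-1, 1), rng.uniform(-1, 1)]  # connection weights
    b = rng.uniform(-1, 1)                        # bias
    for _ in range(epochs):
        for (x1, x2), target in examples:
            out = sigmoid(w[0] * x1 + w[1] * x2 + b)
            err = target - out                    # error on this trial
            grad = err * out * (1 - out)
            w[0] += lr * grad * x1
            w[1] += lr * grad * x2
            b += lr * grad
    return w, b

# a task this simple unit CAN converge on: logical OR
OR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train(OR)
preds = [round(sigmoid(w[0] * x1 + w[1] * x2 + b)) for (x1, x2), _ in OR]
print(preds)  # → [0, 1, 1, 1]
```

Note that, as the text warns, convergence is not guaranteed in general: a task that this single unit cannot represent at all (such as exclusive-or) would never yield a successful set of weights, however long the run of trials.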
One of the chief attractions of the connectionist approach is that we do
not need to work out in detail how a particular cognitive function is to be
discharged before attempting to model it. For the algorithms used in
connectionist modelling are not algorithms for solving the task being
modelled, but algorithms of back-propagation, for modifying the processing connectivities in the light of output error. This spares us a difficult and
sometimes intractable task of working out how a particular function could
be discharged; for example, connectionist networks have been more successful at pattern-recognition than any programs specifically written to
recognise patterns. It may also have the advantage of preventing us from
imposing explicit cognitive structures on implicit, natural cognitive sys-
tems.
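The point about task-generality can be made concrete with a minimal sketch (sizes, names, and parameters are my own assumptions): the back-propagation routine below contains nothing specific to the task being modelled; it only propagates output error backwards through a small feed-forward net and adjusts the connection weights accordingly. Only the training data names the task.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class Net:
    """A 2-3-1 feed-forward net; weights start at small random values."""
    def __init__(self, seed=0):
        r = random.Random(seed)
        self.w1 = [[r.uniform(-1, 1) for _ in range(2)] for _ in range(3)]
        self.b1 = [r.uniform(-1, 1) for _ in range(3)]
        self.w2 = [r.uniform(-1, 1) for _ in range(3)]
        self.b2 = r.uniform(-1, 1)

    def forward(self, x):
        self.h = [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + b)
                  for ws, b in zip(self.w1, self.b1)]
        return sigmoid(sum(w * h for w, h in zip(self.w2, self.h)) + self.b2)

    def backprop(self, x, target, lr=0.5):
        """Modify connectivities purely in the light of output error --
        no step here mentions the task the net is being trained on."""
        out = self.forward(x)
        d_out = (target - out) * out * (1 - out)
        for j in range(3):
            d_h = d_out * self.w2[j] * self.h[j] * (1 - self.h[j])
            self.w2[j] += lr * d_out * self.h[j]
            for i in range(2):
                self.w1[j][i] += lr * d_h * x[i]
            self.b1[j] += lr * d_h
        self.b2 += lr * d_out

# the task lives only in the data: here, exclusive-or
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
net = Net()

def total_error():
    return sum((t - net.forward(x)) ** 2 for x, t in XOR)

before = total_error()
for _ in range(4000):
    for x, t in XOR:
        net.backprop(x, t)
after = total_error()
# the error should shrink, though -- as the text notes -- convergence
# on a fully successful set of weights is not guaranteed
print(after < before)
```

Swapping in a different list of input–target pairs would train the same code on a different task, which is the sense in which the algorithm is not an algorithm for solving the task being modelled.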
Above all, the chief difference between connectionist modelling and the
classical computational approach is that there are no symbolic representations within the network. Rather, representation is distributed across the
network, in such a way that the whole system can be said to be representing
the content 'the cat is on the mat', say, while no particular parts of the
network represent that content. Most networks operate by superpositional
storage, in fact, so that a wide range of different items of information may
be stored in one and the same set of weights and biases. This gives rise to
further features of connectionist networks which many people find attractive. For example, connectionist networks (like many human cognitive
systems) display graceful degradation when damaged – disabling a single
node in a trained-up network may reduce the efficiency of its processing
somewhat, but is unlikely to prevent it from functioning. In contrast,
blocking a particular stage in the processing of a classical computer is
likely to make the whole system crash, and removing any given symbolic
structure from a data store will delete the corresponding item of infor-
mation. Whether these features really do provide reasons for preferring
connectionist modelling to the symbolic/computational approach is a
topic for discussion in chapter 8.
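Both superpositional storage and graceful degradation can be exhibited in a toy example (not drawn from the text; the patterns and values are my own): a linear associator stores three input–output pairs in one and the same set of weights by summing Hebbian outer products, recalls each pair exactly, and then, when a single connection is severed, recalls every pair only approximately – degraded, but still functioning.

```python
# three mutually orthogonal input patterns (entries ±0.5)
inputs = [
    [0.5, 0.5, 0.5, 0.5],
    [0.5, -0.5, 0.5, -0.5],
    [0.5, 0.5, -0.5, -0.5],
]
targets = [[2, 0], [0, 2], [2, 2]]  # desired output for each input

# superpositional storage: every pair is added into the SAME weights
W = [[0.0] * 4 for _ in range(2)]
for x, t in zip(inputs, targets):
    for i in range(2):
        for m in range(4):
            W[i][m] += t[i] * x[m]

def recall(weights, x):
    return [sum(weights[i][m] * x[m] for m in range(4)) for i in range(2)]

# intact network: every stored pair is recalled exactly
intact = [recall(W, x) for x in inputs]
print(intact)   # → [[2.0, 0.0], [0.0, 2.0], [2.0, 2.0]]

# sever one connection: every recall is now slightly off, none is lost
W[0][0] = 0.0
damaged = [recall(W, x) for x in inputs]
print(damaged)  # → [[1.0, 0.0], [-1.0, 2.0], [1.0, 2.0]]
```

Because every stored item is spread across all the weights, deleting one weight blurs all the recalls a little rather than deleting any one item outright – in contrast with removing a symbolic structure from a classical data store, which deletes the corresponding item entirely.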