Consciousness


Chapter Twelve


The evolution of machines


Now imagine the effects of lots of other synapses on the
same cell but at slightly different times and distances from
the cell body. These all add up so that if the polarisation
at the cell body reaches a critical threshold, the second cell
fires. The process of summation is analogue, but the final
output – to fire or not to fire – is digital. It is not possible to
characterise the brain as simply either digital or analogue.
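The analogue-summation, digital-output idea can be sketched in a few lines of code. This is only an illustration, not a biophysical model: the threshold, the synapse strengths, and the simple distance-weighting rule are all invented for the example.

```python
# Sketch of analogue summation with a digital output, as described above.
# Each synapse contributes a graded (analogue) change in polarisation at
# the cell body; here the contribution simply weakens with distance.
# All numbers are invented for illustration.

THRESHOLD = 1.0  # critical polarisation at which the cell fires

def fires(synaptic_inputs):
    """Each input is a (strength, distance_from_cell_body) pair."""
    # Analogue stage: graded contributions sum continuously.
    polarisation = sum(s / (1.0 + d) for s, d in synaptic_inputs)
    # Digital stage: the output is all-or-nothing.
    return polarisation >= THRESHOLD

# Three nearby excitatory synapses together cross the threshold...
print(fires([(0.6, 0.2), (0.5, 0.1), (0.4, 0.5)]))  # True
# ...but the same synapses far from the cell body do not.
print(fires([(0.6, 3.0), (0.5, 3.0), (0.4, 3.0)]))  # False
```

The point the code makes is the one in the text: the quantity being summed varies continuously, but the decision it feeds is binary.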
SERIAL V. PARALLEL

Many digital computers, and certainly all the early ones,
process information very fast, but serially, i.e. one thing
after another. They have a single central processing unit
and can work simultaneously on different tasks only by
dividing the tasks up and switching between them. By
doing this a serial machine can simulate a parallel machine.
By comparison, neurons operate very slowly, but the brain
is massively parallel, with no central processor but millions
of simultaneously operating cells. To some extent, this
massive parallelism compensates for the lack of speed. Even
so, the overall computing speed of the latest supercomputers
is estimated to be about four times that of a human brain.
Brains have different areas for vision, hearing, planning,
and so on, all operating in parallel all the time, and within
small areas of the brain patterns of information move about
in complex networks that have no serial organisation. The
brain does seem to have bottlenecks, though, such as limited
short-term memory and attention (Chapter 7). Also, many
outputs, including spoken and written language, are serial. In
this sense, the brain is a parallel machine simulating a serial
machine; this is Dennett’s Joycean machine (Chapter 11).

FIGURE 12.2 • The state of polarisation of any part of the postsynaptic
membrane varies continuously depending on the effects
of many synapses at different distances (an analogue
process). When the polarisation at the cell body reaches
a critical threshold, the second cell fires (a digital
process).

law), made in 1965, seems to have held ever since,
though in the past few years the rate of change has
slowed to a doubling only every two-and-a-half
years, and some people predict saturation within
the next decade  – though then again, if you apply
the prediction to neural networks, it keeps being
outstripped. This remarkable expansion describes
brute computing power, but there have been more
fundamental changes in AI that are relevant to
understanding consciousness.
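The arithmetic behind a fixed doubling period is worth making explicit. The snippet below is illustrative only; the 2.5-year figure is the slowed doubling rate mentioned above, and the function is a textbook compound-growth formula, not a forecast.

```python
# Illustrative arithmetic only: the growth implied by a fixed doubling
# period, using the two-and-a-half-year figure mentioned in the text.

def growth_factor(years, doubling_period=2.5):
    """Multiplicative growth after `years` of steady doubling."""
    return 2 ** (years / doubling_period)

print(growth_factor(5))   # five years = two doublings -> 4.0
print(growth_factor(10))  # a decade = four doublings -> 16.0
```

Even at the slowed rate, a decade still buys a sixteen-fold increase in brute computing power.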


CONNECTIONISM


The 1980s saw the flowering of ‘connectionism’, a
new approach based on artificial neural networks
(ANNs) and parallel distributed processing. Part of
the motivation was to model the human brain more
closely, although even twenty-first century ANNs
are extremely simple compared with human brain
cells. The many types of network include recurrent,
associative, multilayered, and self-organising. The
big difference from GOFAI is that ANNs are not pro-
grammed: they are trained. To take a simple exam-
ple, imagine looking at photographs of people and
deciding whether they are male or female. Humans
can do this easily (although not with 100% accuracy)
but cannot explain how they do it. So we cannot use
introspection to teach a machine what to do. With an
ANN, we don’t need to. In supervised learning, the
system can be shown a series of photographs and for
each one produce an output: male or female. If this
is wrong, the synaptic weights are adjusted and the
network is shown the next, and so on. Although it
begins by making random responses, a trained net-
work can correctly discriminate new faces, as well as
ones it has seen before.
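The supervised procedure just described can be shown in miniature. A real face-classifying network is far larger, so this sketch uses a single threshold unit (a perceptron) and an invented four-"pixel" task; the data, labels, learning rate, and hidden rule (label 1 when pixel 0 is bright) are all made up for the example, but the logic is the one in the text: show an example, compare the output with the correct answer, and adjust the weights only when the answer is wrong.

```python
import random

# Toy version of the supervised training described above: one threshold
# unit, trained by nudging its weights whenever its answer is wrong.
# The 4-pixel "photographs" and their 0/1 labels are invented.

random.seed(0)

# Tiny training set: the (invented) rule hidden in the data is that the
# label is 1 whenever pixel 0 is bright.
data = [([1, 0, 1, 0], 1), ([1, 1, 0, 0], 1),
        ([0, 1, 0, 1], 0), ([0, 0, 1, 1], 0)]

# The network begins with random weights, so its first responses are random.
weights = [random.uniform(-0.5, 0.5) for _ in range(4)]
bias = 0.0

def output(pixels):
    # Sum the weighted inputs; fire (1) if the sum crosses the threshold.
    s = sum(w * x for w, x in zip(weights, pixels)) + bias
    return 1 if s > 0 else 0

# Training: show each example; when the answer is wrong, adjust the
# weights a little in the direction that would have corrected it.
for _ in range(20):
    for pixels, target in data:
        error = target - output(pixels)
        if error:
            weights = [w + 0.1 * error * x
                       for w, x in zip(weights, pixels)]
            bias += 0.1 * error

print([output(p) for p, _ in data])  # after training: [1, 1, 0, 0]
```

Nothing in the trained unit "explains" the rule it has learned; the knowledge is spread across the weights, which is exactly why such networks can acquire discriminations their designers cannot articulate.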


How does it do this? Even a simple network con-
sists of many units, each resembling a neuron in the
sense that it sums the inputs it receives according to
a mathematical function, and produces an output
(a ‘1’ or a ‘0’, fire or don’t). The units are connected
in parallel, each connection having a weight, or
strength, that can be varied. A simple network might
consist of three layers: an input layer, a hidden layer,
and an output layer. For the example of faces, the
input layer would need enough units to encode an
array corresponding to the photographs (e.g. one
for each pixel), and the output layer would need
one unit that outputs ‘0’ for male, ‘1’ for female. For
