otherwise sit still; the second let him wander around without crashing into things; and the third made him explore by looking for distant places and trying to reach them. Correction signals operated between all three. Such a creature's overall behaviour looks intelligent to an observer but, says Brooks, 'It is only the observer of the Creature who imputes a central representation or central control. The Creature itself has none; it is a collection of competing behaviors' (1997, p. 406).
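The layered, model-free idea can be pictured in code. The Python toy below is only an illustrative sketch, not Brooks's implementation: the behaviour names, sensor keys, thresholds, and the simple priority rule standing in for his correction signals are all assumptions made for the example.

```python
import random

# Toy behaviour-based controller in the spirit of Brooks's layered scheme.
# Each layer maps raw sensor readings straight to a motor command (or None);
# a priority rule stands in for the correction signals between layers.
# Nowhere is there a central world model or a central controller.

def avoid(sensors):
    """First layer: turn away from nearby obstacles, otherwise keep quiet."""
    if sensors["obstacle_distance"] < 0.3:   # hypothetical threshold in metres
        return {"turn": 1.0, "speed": 0.0}
    return None

def explore(sensors):
    """Third layer: head for a distant target whenever one is visible."""
    bearing = sensors.get("target_bearing")
    if bearing is not None:
        return {"turn": bearing, "speed": 0.4}
    return None

def wander(sensors):
    """Second layer: amble about at random so the creature keeps moving."""
    return {"turn": random.uniform(-0.5, 0.5), "speed": 0.2}

# Higher-priority behaviours suppress the ones below when they have something to say.
LAYERS = [avoid, explore, wander]

def step(sensors):
    for behaviour in LAYERS:
        command = behaviour(sensors)
        if command is not None:
            return command
    return {"turn": 0.0, "speed": 0.0}   # nothing to do: sit still

# Example tick: no obstacle, no distant target, so the creature simply wanders.
print(step({"obstacle_distance": 2.0}))
```

Each layer here reads the world directly and competes for the motors; any appearance of purposeful exploration belongs to the observer, not to a plan stored inside the creature.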


This is related to Marvin Minsky's (1986) idea of 'the society of mind', in which intelligence emerges from many separate modules doing many simple things all at once; to Ornstein's (1991) description of the mind as a 'squadron of simpletons'; to Dennett's (1991) replacement of the inner audience and 'Central Meaner' with a pandemonium of stupid machine-like homunculi; and to Clark's (2013) argument that the mind is best understood as a distributed 'prediction machine'. By building robots this way, Brooks discovered that 'When we examine very simple level intelligence we find that explicit representations and models of the world simply get in the way. It turns out to be better to let the world itself serve as its own model' (1997, p. 396). Although Brooks makes no claims to biological significance, this is the same conclusion that Kevin O'Regan, Alva Noë, and others came to from studying change blindness in humans (Chapter 3). It seems that representations of the world may not always be necessary for building effective robots, and evolution may not have used them when building our vision system, either. Representations are still crucial in other respects: for the sensorimotor theory in storing knowledge about the laws of sensorimotor contingency, and for predictive coding in providing generative models based on prior experience. But the representations are not 1:1 mappings of the 'outside' world.
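The difference between such a representation and a 1:1 copy can be made concrete with a small numerical sketch. The Python toy below is a cartoon of the predictive-coding idea, not any theory's actual mathematics; the signal values and the single learning rate are invented for illustration.

```python
# Toy predictive-coding loop: keep one internal estimate, predict the incoming
# signal from it, and revise the estimate using only the prediction error.
# The estimate is a running generative guess, not a copy of the input.

estimate = 0.0          # prior belief about some feature of the world
learning_rate = 0.2     # how strongly prediction errors revise the belief

sensory_samples = [1.0, 1.2, 0.9, 1.1, 1.0]   # made-up incoming signal

for sample in sensory_samples:
    prediction = estimate                 # generative model predicts the input
    error = sample - prediction           # prediction error is the only 'news'
    estimate += learning_rate * error     # belief is revised, not replaced
    print(f"input={sample:.2f}  predicted={prediction:.2f}  "
          f"error={error:+.2f}  new estimate={estimate:.2f}")
```

What the loop stores is a belief shaped by prior experience and corrected by error signals, which is quite different from holding an internal duplicate of the world.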


All this is highly relevant to understanding consciousness. Along with GOFAI goes
the idea that conscious experiences are mental models or inner representations of
the world. Although intuitively plausible, this idea is problematic. For example, it is
not clear how a mental model can be an experience, nor why some mental models
are conscious while most are not. These are the familiar problems of subjectivity,
and of the magic difference between conscious and unconscious processes.


Doing away with representations may solve some problems, but it raises others.
In particular, the nonrepresentational approach has difficulties dealing with expe-
riences that are not driven by continuous interaction with the outside world, such
as reasoning, imagining, and dreaming. On representational theories, it is easy
to think that when I dream of drowning in huge waves, my brain is constructing
representations of sea, water, and waves, and simulating death; but if there are no



FIGURE 12.5 • William Grey Walter with one of his famous 'tortoise' robots, photographed in 1951. He built two prototypes, Elmer and Elsie, in Bristol in 1948–9. Later six more were built and displayed at the Festival of Britain in 1951. They had a photocell eye, two vacuum tube amplifiers that drove relays to control steering and drive motors, and a Perspex shell with a switch that operated when the shell contacted anything. They moved about autonomously in a lifelike manner, demonstrating the beginnings of artificial intelligence, and showed a form of self-preserving behaviour by crawling back into their charging hutch when their batteries ran low. In 1995 what was thought to be the last remaining Grey Walter tortoise was found and repaired by Owen Holland, and finally ended up in the Science Museum.