Consciousness


SECTION FOUR: EVOLUTION
can assess the current situation and take evasive action. This approach is sometimes called situated robotics, or behaviour-based (as opposed to knowledge-based) robotics.

One implication is that intelligent behaviours can emerge from simple systems, perhaps holding out the hope that consciousness might do the same. There are many examples of such emergence in biology. For example, termites build extraordinary structures that look as though they must be planned, when in fact they emerge from simple rules about when to add or remove mud, embodied in the individual termites. Emergent intelligence in social insects is the inspiration behind the field of swarm robotics (Brambilla et al., 2013), in which large numbers of simple robots following relatively simple rules can produce multiple complex swarm behaviours, whether for use in medicine, disaster rescue, or autonomous warfare.
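The termite rules above are simple enough to simulate. The sketch below is a minimal version of the classic "termite and wood chip" clustering model (chips standing in for mud): every agent wanders at random, picks up a chip when it steps on one empty-handed, and drops its chip beside the next chip it bumps into. No agent has any plan, yet chips gradually gather into piles. The grid size, counts, and rules here are illustrative assumptions, not a model of real termites.

```python
import random

def termite_sim(size=20, chips=80, termites=10, steps=20000, seed=1):
    """Random-walking agents with two local rules: pick up a chip you
    step on (if empty-handed), and drop your chip beside the next chip
    you meet. Clusters emerge with no plan anywhere in the system."""
    rng = random.Random(seed)
    grid = [[0] * size for _ in range(size)]
    cells = [(r, c) for r in range(size) for c in range(size)]
    for r, c in rng.sample(cells, chips):      # scatter chips at random
        grid[r][c] = 1
    # each agent: [row, col, carrying-a-chip flag]
    agents = [[rng.randrange(size), rng.randrange(size), 0]
              for _ in range(termites)]
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    for _ in range(steps):
        for a in agents:
            dr, dc = rng.choice(moves)
            a[0] = (a[0] + dr) % size          # wrap around the edges
            a[1] = (a[1] + dc) % size
            r, c = a[0], a[1]
            if not a[2] and grid[r][c]:        # rule 1: pick up
                grid[r][c] = 0
                a[2] = 1
            elif a[2] and grid[r][c]:          # rule 2: drop beside a chip
                for dr2, dc2 in moves:
                    nr, nc = (r + dr2) % size, (c + dc2) % size
                    if not grid[nr][nc]:
                        grid[nr][nc] = 1
                        a[2] = 0
                        break
    return grid
```

Printing the grid after many steps shows the initially scattered chips collected into a few large clumps, even though no rule mentions clumps at all.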
As for single robots – imagine watching a small wheeled robot moving along next to a wall. It does not bump the wall or wander far away from it, but just wiggles along, reliably following the bends and turning the corners. How? It might have been programmed to follow the wall using a detailed internal representation of the area and instructions for coping with each eventuality, but in fact it need not be. All it needs is a tendency to veer to the right, and a sensor on the right side to detect close objects and make it turn slightly to the left whenever it does so. By balancing the two tendencies, wall-following behaviour emerges.
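The two tendencies can be captured in a few lines. The sketch below reduces the robot to a single number, its distance from the wall: by default it drifts toward the wall (the veer to the right), and whenever a simulated right-hand sensor reports the wall closer than a threshold, it turns slightly away. The particular numbers are illustrative assumptions; only the balance of the two rules matters.

```python
def follow_wall(steps=200, start_distance=5.0, threshold=2.0):
    """1-D caricature of the wall-follower: track only the robot's
    distance from the wall. Rule 1: veer toward the wall. Rule 2: when
    the right-hand sensor trips, turn slightly away. Neither rule alone
    follows the wall; together, the distance settles into a narrow band."""
    d = start_distance
    history = []
    for _ in range(steps):
        sensor_close = d < threshold       # right-side proximity sensor
        d += 0.15 if sensor_close else -0.1
        history.append(d)
    return history
```

Run it and the distance falls from its starting value, then oscillates in a tight band around the sensor threshold: the "wiggling along" of the real robot, with no map or plan anywhere in the controller.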
This is a good example of that slippery concept, an emergent property. An apparently intelligent behaviour has emerged from an extremely simple system. This might help us to consider whether consciousness could also be an emergent property of a physical system, as some believe it is (Humphrey, 1987; Mithen, 1996; Searle, 1997; Feinberg and Mallatt, 2016).


INTELLIGENCE WITHOUT REPRESENTATION


Traditional AI assumed that intelligence is all about manipulating representations, yet our wall-following robot managed with none. How much further could this go? To find out, Rodney Brooks and his colleagues at MIT spent many years building robots with no internal representations (Brooks, 1997, 2002).

Brooks's 'creatures' can wander about in complex environments such as offices or labs full of people and carry out tasks such as collecting rubbish. They have several control layers, each carrying out a simple task in response to the environment. These are built on top of each other as needed and have limited connections enabling one layer to suppress or inhibit another. This is referred to as 'subsumption architecture' because one layer can subsume the activity of another. Brooks's robot Allen, for example, had three layers: the lowest prevented him from touching other objects by making him run away from obstructions but

FIGURE 12.4 • A termite mound in West Bengal, India. Each individual termite follows simple rules about when to add mud and when to remove it. None has a plan of the overall mound, yet the complex system of tunnels and walls emerges. Is consciousness an emergent phenomenon like this?
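A subsumption controller of the kind described above can be sketched as a priority-ordered stack of layers, each a simple trigger–action pair, where an active layer subsumes everything below it. Only Allen's lowest, obstacle-avoiding layer is described in the text; the other layer names, the sensor fields, and the priority order below are illustrative assumptions, not Brooks's actual design.

```python
def subsumption_step(sensors):
    """One control cycle of a toy subsumption architecture: check layers
    in priority order and let the first active layer subsume the rest.
    Layer and sensor names are illustrative, not Brooks's code."""
    layers = [
        # (layer name, trigger, action) -- highest priority first
        ("avoid",   lambda s: s.get("obstacle_near", False), "run_away"),
        ("explore", lambda s: s.get("distant_place_seen", False), "head_for_it"),
        ("wander",  lambda s: True, "wander_randomly"),  # always-on default
    ]
    for name, trigger, action in layers:
        if trigger(sensors):
            return name, action

# A nearby obstacle subsumes everything else:
# subsumption_step({"obstacle_near": True, "distant_place_seen": True})
#   -> ("avoid", "run_away")
```

The point of the architecture is what is missing: there is no central planner and no world model, just layers of simple reflexes, each complete in itself, stacked so that the behaviour of the whole looks purposeful.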
