Note that IDA and LIDA are software agents and so, like KA, are not permanently tied to any particular physical machine, raising the question of just what it is that we think might be conscious. Could a complicated mass of software or a virtual machine without a single material body be conscious? Could it be that self-representations rather than people (or bats, or octopuses) are the subjects of experience? If so, there might already be conscious entities living in cyberspace and supported by multiple machines in different locations. Come to think of it, when we refer to ourselves as conscious, are we referring to our bodies, our brains, our inner selves, or something else entirely? This is another interesting conundrum that machine consciousness may throw light on.
A strongly embodied example is Owen Holland’s CRONOS, an anthropomimetic upper-body robot designed to include internal models of itself and the world (Holland, 2007; Holland, Knight, and Newcombe, 2007). The idea of internal models began over sixty years ago, when Craik suggested that intelligent organisms might need small-scale models of external reality and of their own possible actions. For a long time Holland rejected this idea and developed purely behaviour-based robotics with no internal models, but he later returned to this principle in creating CRONOS and its successor ECCE Robot. With human-like elastic muscles and tendons and a bone-like skeleton, CRONOS has a single colour camera for an eye, an elongated neck to help it inspect objects, and complex moving arms. It builds models of the world around it by moving its eye and looking at and interacting with objects, and uses a model of its own body and capabilities to plan its possible actions. It does not interact with people, however, and has no language or emotions.
How does this relate to consciousness? The robot has a model of itself, the Internal Agent Model (IAM), and a model of the world, the Internal World Model (IWM), and it uses these to act and to track changes in its own body and beyond. These internal simulations form the basis for what its creators call ‘functional imagination’: the ability to manipulate information that is not directly available to sensors (Marques and Holland, 2009). These models may not be completely detailed and accurate, but they are the only self and world that the robot knows. ‘Unless it somehow had access to the information that it was only a model, the IAM would operate as if it were the agent itself’ (Holland, 2007, p. 101). Critically, this means that the IAM would be transparent in the sense used by Metzinger (2003, 2009) when describing his phenomenal self-model: the robot depends on a model which doesn’t include the fact that it’s a model. This model would in some sense describe itself as an embodied agent – rather as we do.
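
To make the transparency idea concrete, here is a minimal Python sketch. It is not Holland’s actual architecture; every class, method, and figure in it is invented for illustration. The agent tests an action against its world model and self-model, and nothing in the self-model represents the fact that it is only a model:

from dataclasses import dataclass, field

@dataclass
class InternalWorldModel:
    """Toy IWM: last known object positions, refreshed from the senses."""
    objects: dict = field(default_factory=dict)

    def refresh(self, sensor_readings: dict) -> None:
        # The model is only ever as current as its last sensory update.
        self.objects.update(sensor_readings)

@dataclass
class InternalAgentModel:
    """Toy IAM: the robot's model of its own body and reach.

    Note what is absent: nothing here marks the model *as* a model.
    Any process consulting it treats it as the agent itself -- the
    'transparency' that Holland and Metzinger describe.
    """
    arm_reach: float = 0.6  # metres; an assumed figure

    def can_reach(self, position: tuple) -> bool:
        x, y, z = position
        return (x * x + y * y + z * z) ** 0.5 <= self.arm_reach

def imagine_grasp(iam: InternalAgentModel, iwm: InternalWorldModel,
                  target: str) -> bool:
    """'Functional imagination': test an action against the models,
    offline, without moving the physical body at all."""
    position = iwm.objects.get(target)
    return position is not None and iam.can_reach(position)

iwm = InternalWorldModel()
iam = InternalAgentModel()
iwm.refresh({"cup": (0.3, 0.2, 0.1)})  # simulated camera input
print(imagine_grasp(iam, iwm, "cup"))  # True: the imagined grasp succeeds

The design point is the omission: because InternalAgentModel carries no ‘I am a model’ flag, any planning routine that consults it simply treats its contents as the agent, which is all that transparency requires in this sense.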


Gauges in the head, Szpindel had called them. But there were other things in there too. There was a model of the world, and we didn’t look outward at all; our conscious selves saw only the simulation in our heads, an interpretation of reality, endlessly refreshed by input from the senses. What happens when those senses go dark, but the model – thrown off-kilter by some trauma or tumor – fails to refresh? How long do we stare in at that obsolete rendering, recycling and massaging the same old data in a desperate, subconscious act of utterly honest denial? How long before it dawns on us that the world we see no longer reflects the world we inhabit, that we are blind?

(Peter Watts, Blindsight, 2006, p. 193)

Other examples of theory-led robotics might build on Giulio Tononi’s (2015) integrated information theory (if you build integration into a machine in ways that increase Φ, the corresponding degree of consciousness should follow) or on Michael Graziano’s attention schema theory (if a system can model its own attention, it can lay claim to consciousness) (Webb and Graziano, 2015).
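Graziano’s idea in particular lends itself to a toy sketch. The following Python fragment, with all names and the schema’s contents invented for illustration, separates the two components his theory distinguishes: an attention process and a simplified schema of that process. Because the schema omits the selection mechanism, the system’s self-report sounds like a claim to awareness rather than to signal competition. (Φ is harder to sketch, since computing integrated information means comparing all partitions of a system, which becomes intractable beyond a handful of elements.)

# Illustrative sketch only -- not Webb and Graziano's implementation;
# all names and the schema's contents are invented for this example.

def attend(signals: dict) -> str:
    """Competitive selection: attention goes to the strongest signal."""
    return max(signals, key=signals.get)

def attention_schema(focus: str) -> dict:
    """The system's simplified model of its own attention. Crucially,
    it records *that* the system is focused on something, not how the
    selection works -- so its self-report reads like a claim to
    awareness rather than to a winner-take-all competition."""
    return {"state": "aware", "of": focus}

signals = {"red cup": 0.9, "background hum": 0.2, "own hand": 0.5}
focus = attend(signals)
print(attention_schema(focus))  # {'state': 'aware', 'of': 'red cup'}
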
According to quantum theories, none of these implementations would produce real consciousness, because that needs quantum processes. For example, in Penrose and Hameroff’s version, consciousness emerges from quantum coherence in the microtubules, so one would need to build a quantum computer that achieved this kind of integration across its system. One might then conclude that it was really conscious.
None of this avoids the two big problems mentioned at the start of this section. First, we do not know what consciousness is. Each of these theories (and many others) says something about what consciousness is or what it emerges from, but if the appropriate machine were built, critics could still argue that this particular theory was wrong and therefore the machine was not conscious after all. Second, we have no test for proving whether a machine is conscious or not, so even if one of these machines claimed to be conscious, stayed awake all night worrying about consciousness, and passed the Turing test, we could still not convince sceptics that it was really conscious, even though we might have learned a lot from the machine.

DELUDED MACHINES
There is a completely different way of thinking about machine consciousness. Perhaps consciousness is not what it seems to be, and we are in some fundamental way deluded about the nature of consciousness. According to this view, we may believe we are conscious observers, experiencing a continuous stream of contents passing through our conscious minds, but we are wrong, because there is no Cartesian Theatre, no audience, no ‘actual phenomenology’, and no continuous stream of conscious experiences (Dennett, 1991; Blackmore, 2002, 2012). We humans certainly seem to be conscious, and that requires explaining, but the right kind of explanation

‘Engineering will step from the mere design of complex artefacts to the design of subjects’

(Chella and Manzotti, 2007, p. 11)


PROFILE 12.3


Owen Holland (b. 1947)
Owen Holland is best known for his work on machine consciousness and for building biologically inspired robots, but he only took up robotics, as a hobby, in 1988, after working as a production engineer, boatbuilder, transport manager, insurance salesman, and chef in a steak bar. He had a croft in Orkney for eight years, where he built his own house and tended cows, goats, ducks, and chickens, grew oats, and made hay. Just as eclectically, he has held academic positions in psychology, electrical engineering, computer science, and cognitive robotics at universities in England, Scotland, Germany, Switzerland, and the USA. He worked on two robot projects at Caltech and helped set up the robotics lab at the University of the West of England, Bristol. Holland used the biologically inspired robot CRONOS to ask whether it could be phenomenally conscious according to various theories of consciousness. CRONOS has since developed into the anthropomimetic ECCE Robot which, with its human-like structures, may lead to human-like cognition. He is now Emeritus Professor of Cognitive Robotics in the Sackler Centre for Consciousness Science at the University of Sussex.