Consciousness


SECTION FOUR: EVOLUTION


FIND X AND PUT IT IN A MACHINE
Suppose that humans have some magic ingredient ‘X’, by virtue of which they
are really conscious. If we wanted to make a conscious machine, we might then
proceed by finding out what X is and putting it into a machine, or we might build
a machine in such a way that X would naturally emerge. The machine would then,
theoretically at least, be conscious.

Chalmers (1995a) says that those who are serious about solving the hard problem need to find the right ‘extra ingredient’ to account for conscious experience.
McGinn (1999) calls the property that would explain consciousness C*, and asks
whether C* is possible in inorganic materials or not. He concludes that we cannot
possibly know. According to his mysterian theory the human intellect is incapable
of understanding how organic brains become conscious, so there is no hope of us
ever finding C* or knowing whether a machine could have it.
Others are less pessimistic. British AI researchers Aaron Sloman and Ron Chrisley are not deterred in their search for machine consciousness by the fact
that ‘We do not yet have the concepts necessary for fully understanding what the
problem of consciousness is’ (2003, p. 140). One of the strongest proponents
of AC is David Chalmers, who rejects the Chinese Room and other arguments
against computationalism. Even though he is a dualist of sorts, he claims that
any system with the right sort of functional organisation would be conscious.
He argues ‘not just that implementing the right computation suffices for
consciousness, but that implementing the right computation suffices for rich
conscious experience like our own’ (1996, p. 315). He does not go on to say
what ‘the right computation’ is, but he has defended a very broad notion of
computation as a foundation for artificial intelligence, claiming that in it ‘the
causal structure of mentality is replicated’ (1993/2011). So, Chalmers suggests
trying to find X as a way forward.
How might we do this? One way is to make a list of criteria for a conscious
machine: a list of possible Xs. Philosopher Susan Stuart (2007) suggests ‘engaged
embodiment, goal-directed animation, perception and imagination’ and the
ability to synthesise experiences and recognise them as its own experiences, and she emphasises the importance of kinaesthetic as well as cognitive imagination.
AI researcher Igor Aleksander tackles phenomenology ‘as the sense of self in a perceptual world’ and starts from his own introspection to break this down into five key components or axioms (Aleksander and Morton, 2007). He then uses these as criteria for a conscious machine (Aleksander, 2007). They are:

1 Perception of oneself in an ‘out there’ world.
2 Imagination of past events and fiction.
3 Inner and outer attention.
4 Volition and planning.
5 Emotion.
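The five axioms function as a checklist: a candidate system counts as meeting Aleksander's criteria only if it exhibits all five. As a minimal sketch (hypothetical code, not anything from Aleksander's own work), the criteria could be represented like this:

```python
# Hypothetical sketch: Aleksander's five axioms treated as boolean criteria
# that a candidate machine architecture would have to satisfy in full.
from dataclasses import dataclass

@dataclass
class AxiomChecklist:
    """One flag per axiom; field names are illustrative, not Aleksander's."""
    perception_of_self_in_world: bool  # 1 perception of oneself in an 'out there' world
    imagination: bool                  # 2 imagination of past events and fiction
    attention: bool                    # 3 inner and outer attention
    volition_and_planning: bool        # 4 volition and planning
    emotion: bool                      # 5 emotion

    def satisfies_all(self) -> bool:
        # The criteria are conjunctive: every axiom must hold.
        return all(vars(self).values())

# A system with everything except emotion fails the test.
candidate = AxiomChecklist(True, True, True, True, emotion=False)
print(candidate.satisfies_all())  # False
```

The point of the conjunctive check is that, on Aleksander's view, the axioms are jointly necessary criteria; no subset of them would by itself qualify a machine as conscious.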

On this basis, Aleksander develops an abstract architecture called the Kernel
Architecture (KA) that incorporates all five. A key mechanism is depiction: a direct

‘What is your extra ingredient, and why should that account for conscious experience?’
(Chalmers, 1995a, p. 207)

‘a model that is computationally equivalent to a mind will itself be a mind’
(Chalmers, 1993/2011)


FIGURE 12.11 • Can we find X and put it in a machine?
