Consciousness


machine. This question is much more difficult than the already difficult question posed by Turing. When he asked ‘Can a machine think?’, he could cut through arguments about definitions by setting an objective test for thinking.


This just doesn’t work for consciousness. First, the arguments about definitions are just as bad, if not worse, because there is no generally agreed definition of consciousness beyond saying that it means subjective experience, or ‘what it is like to be’ (Chapter 1). Yet many people have a strong intuition that there is nothing arbitrary about it. Either the machine really does feel, really does have experiences, and really does suffer joy and pain, or it does not. This intuition may, of course, be quite wrong, but it stands in the way of dismissing the question ‘Can machines be conscious?’ as merely a matter of definition.


Second, there is no obvious equivalent of the Turing test for consciousness. If we
agree that consciousness is subjective, then the only one who can know whether
a given machine is conscious is the machine itself, and so there is no sense in
looking for an objective test.


The problem becomes clearer if you try to invent a test. An enthusiastic robot-builder might, for example, suggest that her machine would count as conscious if it cried when pricked, replied ‘yes’ when asked whether it was conscious, or pleaded with people not to turn it off. But the sceptic would say, ‘It’s only got to have an audio recording and a few simple sensors inside it. It’s only pretending to be conscious. It’s a zombie behaving as if it’s conscious’.


Suppose she decided that it had to laugh at jokes, understand the plot of Hamlet,
and look deep into your eyes and make you feel loved, and then actually made
such a machine. The sceptic might say, ‘It’s still only pretending to be conscious.
It’s just programmed to respond in human-like ways to jokes and Shakespeare’s
plays. It doesn’t really love you.’ Then the robot-builder might respond, ‘But I know it’s conscious because if it has a sense of humour, understands human tragedy, and can manipulate human emotions, it must be’.


You may have noticed two very familiar arguments here. The robot-builder is a kind of functionalist. She believes that thoughts, beliefs, and subjective experiences are all functional states, so if her robot carries out certain functions then it must be conscious – not because it has some extra mysterious stuff called consciousness that causes those things to happen, but because that’s what consciousness is. In other words, any machine that could understand Hamlet, or look at you in that special way, would necessarily have the linguistic and emotional capacity either to have subjective experiences and be conscious, or – in illusionist terms – to claim to have subjective experiences, and think it was conscious.


The sceptic, meanwhile, is a consciousness inessentialist. He believes in zombies.
He thinks that however impressive the actions of the machine were, they would
not prove it was conscious. His answer would always be, ‘It’s only pretending.
Even if it could do everything you and I do, there would still be nothing it was like
to be that machine. There would be no light of consciousness on inside’.


If both of these positions are assumed to be legitimate (and they may not be),
then there can be no simple test for machine consciousness. Even if functionalists


‘at least one kind of
computer can be
conscious: the human
brain’

(Prinz, 2003, p. 112)

‘No one thinks QRio is
conscious’

(Greenfield, in Blackmore, 2005,
p. 98)

FIGURE 12.8 • If a robot told you its life story, looked hurt when you offended it, and laughed at your funny stories, would you think it was conscious? How could you tell?