for the real world’ (1998b, p. 29). In 1995, these restrictions were lifted and the rules gradually changed. In 2008, each judge was given five minutes to hold simultaneous conversations with the competitor and a human, and the winner fooled three of the twelve judges into believing it was human. This comes very close to passing Turing’s test. Since 2010, the competition has involved 25 minutes of conversation, and the competition will end when the $100,000 prize is awarded for a machine that can fool half the judges in a multimodal Turing test including understanding music, speech, pictures, and videos.
Suppose that a machine does pass the test. Suppose that it already has: in 2014 a chatterbot called Eugene Goostman fooled ten out of thirty judges at a Royal Society event in five minutes’ conversation during which it pretended to be a 13-year-old Ukrainian boy. What should we conclude about that machine? If the winner were a traditional AI program (as Eugene is), the computational functionalist would conclude that Strong AI had been vindicated, and the program was truly thinking by virtue of running the right program. Other functionalists would argue that such a traditional rule-based program never could pass the test, but that other kinds of machine might, and these would then be truly thinking. Others would insist that whatever the machine is doing, and however well it does it, it is still not really thinking like a human does. In other words, it is only pretending to think, or behaving as if it is thinking. An alternative is to deny that there is any distinction between ‘real’ thinking and ‘as-if’ thinking, a denial that is perhaps in the spirit of Turing’s original conception.
The Turing test concerns the ability to think, but all its problems and insights are paralleled in the even trickier question: could a machine be conscious?
COULD A MACHINE BE CONSCIOUS?
Could a machine be conscious? In other words, is there (or could there ever be) ‘something it is like to be’ a machine? Could there be a world of experience for the machine?
‘We must be mysterians’, says American philosopher Jesse Prinz. ‘The problem isn’t that it would be impossible to create a conscious computer. The problem is that we cannot know whether it is possible’ (2003, p. 111).
‘We have known the answer to this question for a century’, says Searle.
The brain is a machine. It is a conscious machine. The brain is a biological machine just as much as the heart and the liver. So of course some machines can think and be conscious. Your brain and mine, for example. (1997, p. 202)
This sharpens up our question, because what we really mean to ask is whether an artificial machine could be conscious; whether we could make a conscious
FIGURE 12.7 • The trick, whether you are putting a computer to the Turing test or playing the imitation game, is to know which questions to ask.