combined with “big data”’—in contrast to ‘the fluidity of the human mind’.^44 So do we need
yet another holy grail for AI? Some have suggested that AI’s ‘ultimate test’ is really mahjong!^45
Media reports also claimed that AlphaGo was distinctly different from previous programs.
Yet Turing himself investigated artificial neural networks (see Chapter 29) and wanted to build a
machine that could learn autonomously from experience (see Chapter 30). Anthony Oettinger,
who wrote the first AI programs to include learning, was much influenced by Turing’s views
on machine learning.^46 Oettinger’s programs were written for the EDSAC computer at the
University of Cambridge, the second electronic stored-program computer to run, in May 1949.
In 2016 an editorial in The Times claiming that DeepMind’s computer was ‘not programmed
how to play’ Go but rather ‘taught itself’ (and that it can ‘use intuition’ and ‘think’) drew a swift
reaction from a scientist who had actually worked on the EDSAC.^47 Norman Sanders said that
DeepMind’s computer ‘did not learn to teach itself; it was programmed to do so’. There is ‘no difference in principle’, he claimed, between DeepMind’s machine and the EDSAC; the difference
is ‘just that today’s processing capacity matches the requirements’ of chess and Go.^48
Oettinger said that one of his programs could pass a restricted Turing test.^49 Likewise,
according to one of the IBM scientists working on Watson, ‘if in the Turing test you were asking
people Jeopardy! questions and we couldn’t tell who was who, then we’ve actually passed the
Turing test’.^50 If a machine succeeded, though, in an unrestricted Turing test—played several
times, with impeccable interrogators—would this suffice to show that the machine can think?
No, say many opponents of Turing’s test: machines that do well in a full-scale imitation game
may nevertheless lack some element crucial for thought.
The X factor
For most critics, this essential element is consciousness. Ken Jennings, one of the losing human
contestants in the Jeopardy! match, said that he felt ‘obsolete’—‘it was friggin’ demoralizing. It
was terrible’. On Searle’s view, if Watson had lost, the computer would not have felt obsolete,
or indeed felt anything; ‘in order to get human intelligence, you’ve got to be conscious’, Searle
says. Some AI sceptics say that AlphaGo didn’t even win the match against Lee; a computer can’t
‘win’ at anything, it was claimed—not until it ‘can experience real joy in victory and sadness in
defeat’.^51 This is, the critics say, why succeeding in the Turing test is not a criterion for thinking.
The imitation game can’t test for consciousness.
In the 1950s several theorists claimed that machines cannot be conscious. This is not to deny
that a machine can have what Donald Michie, another Bletchley Park colleague of Turing’s,
called ‘operational awareness’—the ability to register inner states. What the machine cannot do,
it was said, is experience the ‘feel’ of a taste, colour, or emotion; according to Geoffrey Jefferson,
for example, no machine could feel grief ‘when its valves fuse’.^52 One reply to this is that we can
imagine a machine answering questions on grief (or other emotions, or tastes or colours) in a
way indistinguishable from a human being—and how could it do so without having felt grief?
On this view, it may not be possible for a machine to succeed in Turing’s game without being
conscious. But if it does succeed, that’s evidence it is conscious!
Yet, some have countered, we can also imagine a being—a philosophical ‘zombie’, a non-conscious being otherwise indistinguishable from a human—that does exactly this. The zombie example is intended to show that passing the Turing test is not a criterion for thinking, but
even this objection is not the last word on the test. For the zombie objection to the test to work,