27 THE TURING TEST — FROM EVERY ANGLE
given an arithmetical problem to solve, it should ‘deliberately introduce mistakes in a manner
calculated to confuse the interrogator’, to avoid being identified by its own ‘deadly accuracy’.
The best strategy for the machine, Turing said, is ‘to try to provide answers that would naturally
be given by a man’. The human contestant’s goal (assuming it corresponds to the woman’s goal in
the man-imitates-woman game) is ‘to help the interrogator’ and ‘the best strategy . . . is probably
to give truthful answers’.^11
Several commentators regard Turing’s three-player imitation game as in essence a two-player
game, in which an interrogator interviews a single hidden contestant, which may be a human
or a machine. However, there is no reason to say that Turing believed his three-player game
to be reducible to a two-player game: indeed, there is reason to say the opposite. In 1952, in a
BBC radio broadcast entitled ‘Can automatic calculating machines be said to think?’, Turing
described a two-player version of his test, in which members of a jury interview several contest-
ants one by one, some humans and some computers. He pointed out a difficulty for this version:
the jury, in order to avoid the embarrassing mistake of misidentifying a machine as a human
being, might simply say ‘ “It must be a machine” every time without proper consideration’.^12
Results in the annual Loebner Prize Contests in Artificial Intelligence, which until 2004 (unwit-
tingly) followed this version of the game, show that Turing was perceptive. In the 2000 contest,
for example, members of the jury judged a human as a machine ten times, but did not judge
any machine as a human being; and in the 2003 contest they judged a human as ‘definitely a
machine’ four times, but did not judge any computer as ‘definitely a human’.^13
Crucially, Turing made it clear that doing well in the game is not necessary for intelligence; a
thinking machine might fail his test.^14
what Turing didn’t say
Why did Turing propose replacing the question ‘Can machines think?’ with the question ‘Can
machines do well in the computer-imitates-human game?’? What links success in the game to
thinking? There are at least three very different answers to these questions.
First, the standard answer: Turing was a behaviourist. On this reading, the imitation game
tests whether a machine contestant can behave in a way that is indistinguishable from a ‘think-
ing’ human being—and there is nothing more to thinking than such behaviour. If it walks like
a duck and quacks like a duck, it just is a duck. Many 1950s commentators assumed that Turing
was a behaviourist, and some objected that behaviourism results in ‘drastic redefinitions of
common words’. Theorists writing today typically follow suit, claiming that the Turing test is
‘the first operational definition of machine intelligence’ and that Turing’s criteria for thinking
are ‘purely behavioral’.^15 The reason to think that Turing was a behaviourist is principally that
in the 1940s and 1950s this approach was popular as a way of making the mind a subject of
scientific study. ‘Operationalizing’ a phenomenon to be studied is also common practice within
science: a psychologist, for example, might define anger in terms of how loudly a subject shouts.
So, commentators assumed, Turing operationalized intelligence in terms of how well a machine
does in the imitation game.
The second way of linking the imitation game to thinking is as follows: success in this game
provides evidence—but no guarantee—that the inner states and processes of a machine con-
testant are (computationally) similar to the mental states and processes of a human being. If it
walks like a duck and quacks like a duck, it’s likely that it has the innards of a duck. The reason