to take this view of Turing is that, in setting out the concept of the Turing machine, he compared
the digital computer’s state—that is, where it is in the course of a computation—to a human
computer’s ‘state of mind’ (see Chapter 26). He hypothesized that the human brain is a digital
computer and he believed that ‘machines can be constructed which will simulate the behaviour
of the human mind very closely’.^16 In the 1950s it was said that ‘Turing and others have tried
to define thought processes in terms of formal machine states or logic’.^17 Taking this view, a
computer’s success in the imitation game is an indication that its processing resembles human
thinking.
The difficulty for both these readings is Turing’s own words: when he spoke explicitly about
the concept of intelligence, he did not reduce intelligence either to behaviour or to computation.^18 On the contrary, when introducing the 1948 chess-playing version of the game, he said
that the concept of intelligence is an ‘emotional concept’ and he spoke of the temptation to
‘imagine intelligence’ in a machine.^19 Emotional concepts are those concepts—such as beauty,
colour, and goodness—that we frequently say are ‘in the eye of the beholder’. If a painting looks
beautiful (to normal people in normal conditions), it is beautiful. Turing said that whether or
not an entity is intelligent depends in part on our reaction to it; our imagining intelligence in
another entity is crucial to that entity’s being intelligent. Taking this third view of the imitation
game, the game tests whether the interrogator will imagine intelligence in a machine contestant.
For more on these three interpretations of the Turing test, see Chapter 28.
New games
The Turing test has been—and surely will continue to be—modified, extended, diluted, and
transformed. This indicates its importance to the field of AI.
Some theorists think that it is impossible to have a disembodied thinking thing, and so they
propose the ‘total’ (or ‘robotic’) Turing test, in which a machine ‘must be able to do, in the real
world of objects and people, everything that real people can do, in a way that is indistinguishable
(to a person) from the way real people do it’.^20 In an even harder version of this test—the ‘truly
total’ Turing test—a system of these machines must be capable of generating human abilities by itself. Other theorists focus on how intelligence develops. In the ‘toddler’ Turing test a machine must answer questions exactly as a 3-year-old human would. (In yet another test, a
machine can think if it can rear a child!) Some proposed tests limit the interrogator’s questions:
in the ‘Feigenbaum test’ an expert judge must distinguish a computer expert from a human
expert in a specific field, and in the ‘tutoring test’ a machine contestant must teach as effectively
as a human tutor. Many of what are now called ‘Turing-style’ tests depart from Turing’s format.
In the ‘Pleming test’, for example, two identical machine contestants communicate with each
other, passively observed by a human judge; if this interaction looks like human communication and, even better, shows signs of creativity, the machines can think. (This is a ‘fly on the wall’
or third-person Turing test.)
For Turing the imitation game is a criterion for thinking, whereas later theorists have presented it as measuring other abilities, including free will. In the ‘Lovelace test’ a machine is
genuinely creative if its designer cannot explain the machine’s behaviour in terms of its design
and knowledge store—only machines with an element of mystery pass this test. In the ‘Turing
test for musical intelligence’ a musician-interrogator improvises with two hidden ‘musicians’,
one human and the other a computer music system; if the interrogator is unable to identify