292 | 27 THE TURING TEST—FROM EVERY ANGLE
the human, the machine has ‘musical intelligence’. In the ‘moral’ Turing test an interrogator
interviews, or judges descriptions of the actions of, a machine and a human contestant solely
on issues of morality; if the machine is misidentified as a human, it is a moral being. (In this
game the machine must not be excessively virtuous, as this might give it away.) According to
the ‘Turing triage test’, if a judge, forced to choose which of a human and a machine contestant
to destroy, faces the same moral dilemma as if both were human, then the machine is a person.
Some theorists even propose Turing’s game as a way of testing identity. If after your ‘death’ we
activate a simulation of your brain in a computer, would that simulation be you? Ray Kurzweil,
a Google director of engineering who bet $20,000 that a machine would pass the Turing test
by 2029, says: run a (two-player) imitation game—if the interrogator cannot tell the difference
between talking to the simulation and talking to you, you are in the computer.
These games are mostly thought experiments. The best-known actual experiment is Hugh
Loebner’s competition, which now offers a silver medal (and $25,000 in 2016) to a machine
fooling half the judges after 25 minutes of questioning—and a yet-to-be-awarded grand prize
and gold medal for the first program to succeed in an imitation game in which the interrogator
communicates with contestants via audio-visual inputs as well as text inputs. New competitions
regularly appear; for example, in 2014 the XPRIZE Foundation, which designs and funds inno-
vation competitions ‘aimed at capturing the imagination of the public, spurring innovation,
and accelerating the rate of positive change across the globe’, announced the ‘A.I. XPRIZE’—a
‘modern-day Turing test to be awarded to the first A.I. to walk or roll out on stage and present
a TED talk so compelling that it commands a standing ovation’.^21 Little serious money or sci-
ence has gone into developing machines to take part in such tests, however. (This is not limited
to Turing-style AI competitions; in 2013 the British Computer Society’s Machine Intelligence
Competition for systems showing ‘progress towards machine intelligence’ was cancelled due to
‘insufficient suitable entries’.^22 )
In actual experiments the winning ‘chatbots’ (computer programs that converse in natural
language) are simple programs. For example, in a 2012 test an interrogator typed ‘Epsom, home
of the derby. yourself?’ to state her home town and ask the contestant to do likewise; the program Eugene Goostman responded with ‘My guinea pig says that name Derby sounds very nice’.
(Simulating a Ukrainian 13-year-old is intended to explain away such odd and ungrammatical
outputs.) This was the program announced as a ‘breakthrough’ in AI on the sixtieth anniversary
of Turing’s death. Unsurprisingly, after the initial hype surrounding Eugene Goostman, the
media verdict was: if this program can pass Turing’s test then the test is ‘no longer as relevant as
we first thought’. However, Eugene Goostman didn’t pass the test (see Chapter 25). The organ-
izers set the threshold for passing as follows: ‘If a computer is mistaken for a human more than
30% of the time during a series of 5-minute keyboard conversations it passes the test’.^23 This
confuses Turing’s prediction of how far AI might come by roughly the turn of the century with
the rules of his game. He made the threshold clear: the interrogator must ‘decide wrongly as
often [in the computer-imitates-human game] as he does when the game is played between a
man and a woman’ (see the section ‘A little experiment’). In 1952 Turing said that this would
take at least 100 years.
Turing-style tests are also used to test the believability of virtual characters and the photo-
realism of computer graphics. In one non-verbal test, a human subject interacts with a virtual
character and decides whether it is controlled by a human or a machine, based on how the ‘eyes’
move in response to the subject’s gaze. In the ‘enriched’ Turing test, female subjects in a speed-
dating experiment interacted twice with a virtual character; although they were told that the