their own: those who deceive the interviewers into thinking that this is their own ideology are
said to understand it. In the ‘Turing litigation game’, an interrogator interviews two hidden
contestants, one the plaintiff and the other the defendant, with the aim of finding out whether
the defendant is guilty; proponents of this test say that it is cheaper and less cumbersome than
current legal procedure. It is also often assumed that the fictional Voigt–Kampff empathy test for
androids—from the novel ‘Do androids dream of electric sheep?’ and the film Blade Runner—is
based on the Turing test. The test also appears in numerous cartoons (including several Dilbert
cartoons), artworks (including ‘The original automatic confession machine: a Catholic Turing
Test’), novels, and short stories—and on tee-shirts and baseball caps (Fig. 27.2).
One Turing-style test is now widespread. In a CAPTCHA (Completely Automated Public
Turing Test to Tell Computers and Humans Apart) the judge is a computer whose task is to
tell whether a single contestant is a human or a computer. (CAPTCHAs are sometimes called
‘reverse’ Turing tests.) When you must identify a number in a distorted image before you can
enter a chat room, vote, or use a credit card online, you are a contestant in a CAPTCHA.
Bashing the test
Several commentators have argued that Turing did not intend his imitation game as a test of
intelligence, and that he would have been amused, even horrified, at the game’s central role in
AI. Marvin Minsky said recently that the Turing test is a ‘joke’ and that Turing ‘never intended
it as the way to decide whether a machine was really intelligent’. Aaron Sloman claims that
Turing was ‘far too intelligent to do any such thing’ and that this widespread misinterpretation
has led to ‘huge amounts of wasted effort’ discussing the purely ‘mythical’ Turing test. Likewise,
according to Drew McDermott, all that Turing wanted to do was to ‘shake people’s intuitions
up’.^25 Turing’s own words concerning what he called his ‘imitation tests’ make it clear, however,
that he did intend the game as a test of intelligence. He said ‘I would like to suggest a particular
kind of test that one might apply to a machine’ and he described the question ‘Are there
imaginable digital computers which would do well in the imitation game?’ as a ‘variant’ of the question
‘Can machines think?’. He certainly seemed to be serious, remarking ‘Well, that’s my test. Of
course I am not saying at present either that machines really could pass the test, or that they
couldn’t. My suggestion is just that this is the question we should discuss’.^26
The Turing test has had a hard ride, and—even though criticisms of the test reduce to a
handful of unsuccessful arguments—this will probably continue. Critics have described the
Turing test as ‘virtually useless’, ‘obsolete’, and ‘impotent’, and machines that do well in the game
as ‘dead ends in artificial intelligence research’. Some say that ‘[a]dherence to Turing’s vision . . .
is . . . actively harmful’ and that ‘Turing’s legacy alienates maturing subfields’. In the avalanche
of papers celebrating Turing’s centenary in 2012, the same objections appear; critics claim that
the time has come to ‘bid farewell to the Turing Test’.^27
Some of this criticism stems from the fact that the test is a criterion of human-level (or
human-like) intelligence in machines. Critics claim that trying to build artificial intelligence
by imitating human beings ignores the essence of intelligence in favour of one parochial
example. On this view, the Turing test focuses AI research on uniquely human
behaviour and so is ‘a tragedy for AI’; the game is ‘testing humanity, not intelligence’.^28 What
we need instead, critics say, is a test of ‘general’ or ‘universal’ intelligence (such as the
‘anytime intelligence test’). However, this criticism of Turing is unfounded: the book you are now