The Turing Guide


PROUDFOOT | 295


reading shows Turing’s broad range of interests in machine intelligence—there is no evidence
that he believed that human intelligence is ‘the final, highest pinnacle of thinking’, as critics
claim.^29 Moreover, the search for the universal ‘essence’ of intelligence appears wrong-headed.
Proponents of universal intelligence tests begin by defining intelligence, which Turing refused
to do—if the concept of intelligence is an emotional concept, as he claimed, intelligence does
not have an essence, any more than beauty does. Worse, the definitions these ‘essence’ theorists
offer are behaviourist: ‘general’ intelligence, they say, is the ability to achieve goals, produce
syntactically complex utterances, or answer highly complex questions. However, even if such
an ability is enough for intelligence, is it enough for thinking?
Several critics of the Turing test say that, even if human-level AI is a suitable target for
researchers, the test doesn’t help us to get there: the task of building a machine that an inter-
rogator will misidentify as human is too difficult. Indeed, it has been claimed that ‘the Turing
Test could be passed only by things that have experienced the world as we [humans] have’—
although this claim was withdrawn in the face of modern computer methods of analysing
extraordinarily large amounts of data.^30 However, many people raise this objection only
because they confuse the goal of human-level AI with much-hyped but unsuccessful strategies
taken to get there—such as Good Old Fashioned AI, which focused on ‘symbol systems’ with
a huge store of knowledge. Turing himself suggested another route: build and educate a ‘child
machine’ (see Chapter 30). This involves giving a machine a range of abilities, each of which
could be individually tested and incrementally improved—thus providing a route to human-
level AI.
Critics also complain that the Turing test is too easy, arguing that there is ‘no plausible devel-
opmental pathway from increasing chatterbot performance in the Turing test to genuine artifi-
cial intelligence’. On this view, the test merely encourages a programmer to use cheap tricks in
order to fool a judge, who may be convinced just because he/she is especially gullible. Critics
say that the test aims at ‘artificial stupidity’ rather than artificial intelligence, since the machine
contestant must hide its superhuman capacities (such as perfect typing skills).^31 However, the
Turing test is not easy; many people make this complaint only because they confuse the simple
programming strategies behind Loebner Contest chatbots with those necessary to succeed in
Turing’s much harder game. It is true that individual interrogators might be gullible, or simply
have a bad day, but this shows only that the test is not a one-off test: to obtain a convincing
result, the game must be played several times.^32 Also, the ‘artificial stupidity’ objection is mis-
guided, since disguising the machine is an unavoidable corollary rather than the aim of the test.
Turing said that a machine should not be punished for disabilities, such as an ‘inability to shine
in beauty competitions’, that are ‘irrelevant’ to whether the machine can think.^33 To avoid this,
the machine’s appearance is hidden. A machine’s ‘deadly accuracy’ at typing or mathematics is
an irrelevant ability and so must also be disguised.
Yet another criticism stems simply from the fact that the Turing test is a test. What we need,
some critics say, is not a test of intelligence but a (computational) theory of intelligence. This,
however, assumes that such a theory is possible—and if intelligence is an emotional concept
then such a theory is not possible.
The remaining complaint is that machines passing the Turing test would be of no practi-
cal use; they would be ‘intellectual statues’, critics say—more expensive but no smarter than
human labour. What we need instead, critics argue, are ‘intelligence amplifiers’—systems such
as Google and Siri, and even driverless cars.^34 This is to abandon the grand goal of human-level
AI in favour of ‘narrow’ goals. This stance seems unduly negative, however: machines with the
