(Fig.  27.3). This time The Economist was impressed, saying that ‘defeating a grandmaster at
chess was child’s play compared with challenging a quiz show famous for offering clues laden
with ambiguity, irony, wit and double meaning as well as riddles and puns’. Kurzweil too said
that Watson is a ‘stunning example of the growing ability of computers to successfully invade
[a] supposedly unique attribute of human intelligence’—analysing language and using symbols
to stand for ideas.^37 IBM has since teamed up with the XPRIZE Foundation to offer the IBM
Watson AI XPRIZE to the machine that gives the best TED talk at TED2020. IBM brings serious
money; prizes will total $5 million.^38
Critics, however, say that Watson is no closer than Deep Blue to genuine thinking. John
Searle, for example, argued that Watson demonstrated ‘a huge increase in computational power
and an ingenious program’, but that these ‘do not show that Watson has superior intelligence,
or that it’s thinking’:^39


[Watson] is merely following an algorithm that enables it to manipulate formal symbols. Watson
did not understand the questions, nor its answers, nor that some of its answers were right and
some wrong, nor that it was playing a game, nor that it won—because it doesn’t understand
anything.


Searle’s famous Chinese room argument against the Turing test, which he also uses against
Watson, is discussed in Chapter 25.
According to AI optimists, the ‘most momentous milestone’ since Deep Blue’s defeat of
Kasparov, and ‘a landmark moment’ for AI, was the program AlphaGo’s 2016 defeat of Lee
Se-dol, one of the world’s top three Go players.^40 The program’s creators, Google DeepMind,
said that Go has more possible positions than ‘there are atoms in the universe’ and as a result is
‘a googol times more complex than chess’—making the game impossible to solve by brute force
search.^41 AlphaGo utilizes ‘deep’ neural networks that are intended to capture something of the
structure of the human brain, and improves its performance by playing thousands of games
against itself. According to the lead researcher behind AlphaGo, the program can ‘understand’
Go, and enthusiasts said that it showed an ability ‘eerily similar to what we call intuition’.^42
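DeepMind's arithmetic can be checked on the back of an envelope, using commonly cited estimates that are not given in this chapter: roughly $10^{170}$ legal Go positions, against roughly $10^{47}$ positions for chess and roughly $10^{80}$ atoms in the observable universe. On those figures,
\[
\frac{10^{170}}{10^{47}} = 10^{123} > 10^{100} = \text{one googol},
\]
so both claims hold, and a Deep Blue-style exhaustive search is indeed hopeless for Go.

The self-improvement loop is easier to convey in code than in prose. The sketch below is a deliberately minimal stand-in, not DeepMind's method: tabular temporal-difference learning on noughts and crosses rather than deep networks on Go. The shape of the loop, playing oneself and then nudging one's evaluations toward the observed outcome, is the same; all names and parameter values are illustrative.

```python
import random

# Minimal sketch of self-play learning: a tabular value function for
# noughts and crosses, updated by temporal-difference learning as the
# program plays games against itself. AlphaGo used deep neural networks
# and Go instead; the loop's overall shape is the same.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]
ALPHA, EPSILON = 0.2, 0.1   # learning rate and exploration rate (illustrative)
values = {}                 # board string -> estimated value for player 'X'

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def moves(board):
    return [i for i, cell in enumerate(board) if cell == ' ']

def value(board):
    w = winner(board)
    if w == 'X':
        return values.setdefault(board, 1.0)   # win for X
    if w == 'O':
        return values.setdefault(board, 0.0)   # loss for X
    return values.setdefault(board, 0.5)       # unknown, or a draw

def play(board, move, player):
    return board[:move] + player + board[move + 1:]

def choose(board, player):
    """Epsilon-greedy: usually take the move whose resulting position the
    current value table likes best for this player; occasionally explore."""
    options = moves(board)
    if random.random() < EPSILON:
        return random.choice(options)
    best = max if player == 'X' else min
    return best(options, key=lambda m: value(play(board, m, player)))

def self_play_game():
    board, player, history = ' ' * 9, 'X', []
    while winner(board) is None and moves(board):
        board = play(board, choose(board, player), player)
        history.append(board)
        player = 'O' if player == 'X' else 'X'
    # Back the final outcome up through every position the game visited.
    target = value(history[-1])
    for state in reversed(history[:-1]):
        values[state] = value(state) + ALPHA * (target - value(state))
        target = values[state]

for game in range(20000):   # 'thousands of games against itself'
    self_play_game()
print(f'learned evaluations for {len(values)} positions')
```

Replacing the lookup table with a deep network, and this toy game tree with Go's, is, very roughly, the move DeepMind made; what is in dispute is whether anything in such a loop deserves the word 'understand'.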
Critics, on the other hand, claimed that AlphaGo ‘no more understands the game of Go than
a robot mower understands the concept of a lawn’.^43 One commentator, for example, who had
thought a successful Go program would mean that AI is ‘truly beginning to become as good as
the real thing’, said after Lee’s defeat that AlphaGo seemed merely to use ‘learning algorithms


figure 27.3 Jeopardy! Reprinted courtesy of Jeopardy Productions Inc.