274 | 25 INTELLIGENT MACHINERY
example is in fact harmless. Turing’s interest was the real computational world, and this hypo-
thetical database could not pass the Turing test in the real world—only in a sort of fairyland,
where the laws of the universe would be very different. In the real world, there might simply not
be enough atoms in existence for this huge store of information to be constructed—and even
if it could be, it would operate so slowly (because of the vast numbers of stored conversations
that must be searched) as to be easily distinguishable from a human conversationalist. In fact,
the judge and the foil might die before the database produced more than its first few responses.
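The combinatorial point can be made concrete with a rough calculation. The following minimal Python sketch uses an alphabet size, a conversation length, and a standard estimate of the number of atoms in the observable universe chosen purely for illustration (none of these figures appear in the text):

```python
# Back-of-the-envelope illustration of why a lookup table covering
# every possible conversation could not be built in the real world.
# Illustrative assumptions: the judge's side of a conversation uses
# at most 1,000 characters drawn from a 27-symbol alphabet
# (26 letters plus space).

ALPHABET_SIZE = 27
CONVERSATION_LENGTH = 1_000

# Number of distinct judge-inputs the table would need an entry for.
possible_inputs = ALPHABET_SIZE ** CONVERSATION_LENGTH

# Commonly quoted rough estimate of atoms in the observable universe.
ATOMS_IN_UNIVERSE = 10 ** 80

# The table would need vastly more entries than there are atoms
# available to build it from.
print(possible_inputs > ATOMS_IN_UNIVERSE)  # prints: True
```

Even with these deliberately modest assumptions, the number of possible inputs (about 10 to the power 1431) dwarfs the atoms available to store them, which is the sense in which the hypothetical database belongs to fairyland rather than the real computational world.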
Searle’s circle: the Chinese room
Another famous but misguided criticism of the Turing test is by the philosopher John Searle.
Searle is one of AI’s greatest critics, and a leading exponent of the view that running a computer
program can never be sufficient to produce thought. His objection to the Turing test is simply
stated: according to Searle, entities that don’t think can pass the test—and the entities he has in
mind are computers.^46
Let us imagine that a team in China, say, produces a computer program that successfully
passes the Turing test in Chinese. Searle ingeniously proposes an independent method for test-
ing whether running this program really produces thought. The method is to run the program
on a human computer and then ask the human, ‘Since you are running the program—does it
enable you to understand the Chinese?’ (See Chapters 7 and 41 for more about human comput-
ers.) Searle imagines himself as the human computer. He is in a room that is provided with
many rulebooks containing the program written out in plain English, and he has an unlimited
supply of paper and pencils. As with every computer program, the individual steps in the pro-
gram are all simple binary operations that a human being can easily carry out using pencil and
paper, given enough time.
In Searle’s Turing test scenario, the judge writes his or her remarks on paper, in Chinese
characters, and pushes these into the room through a slot labelled INPUT. Inside the room,
Searle painstakingly follows the zillions of instructions in the rulebooks and eventually pushes
more Chinese characters through a slot labelled OUTPUT. As far as the judge is concerned,
these symbols are a thoughtful and intelligent response to the input. But when Searle, a mono-
lingual English speaker, is asked whether running the program is enabling him to understand
the Chinese characters, he replies ‘No, they’re all just squiggles and squoggles to me—I have no
idea what they mean’. Yet he is doing everything relevant that an electronic computer running
the program would do: the program is literally running on a human computer.
This is a description of Searle’s renowned ‘Chinese room’ thought experiment. He claims
that the thought experiment shows that running a mere computer program can never produce
thought or understanding, even though the program may pass the Turing test. However, there
is a subtle fallacy. Is Searle, in his role as human computer, the right person to tell us whether
running the program produces understanding? After all, there is another conversationalist in
the Chinese room—the program itself, whose replies to the judge’s questions Searle delivers
through the output slot. If the judge asks (in Chinese) ‘Please tell me your name’, the program
responds (in Chinese) ‘My name is Amy Chung’. And if the judge asks ‘Amy Chung, do you
understand these Chinese characters?’, the program responds ‘Yes, I certainly do!’.
Should we believe the program when it says ‘Yes, I am able to think and understand’? This
is effectively the very same question that we started out with—is a computer really capable of