
PROFILE 12.2
John Searle (b. 1932)

John Searle is Professor Emeritus of Philosophy at Berkeley, where he has been since 1959. He says he is, and always has been, ‘interested in everything’. As a student at the University of Wisconsin, he resigned as President of the student board to study harder. He then went to Oxford, where he spent three years as a Rhodes scholar and became a don at Christ Church. He has written books on language, rationality, and consciousness, including The Rediscovery of the Mind and Seeing Things as They Are: A Theory of Perception, and his books have been translated into more than twenty languages. His Chinese Room thought experiment is probably the best-known argument against the possibility of ‘Strong AI’, a term that he invented. He says that ‘brains cause minds’ and argues for ‘biological naturalism’.

important implications for consciousness. In an echo of one of the criteria for animal consciousness we considered in Chapter 10, it makes language central.
Searle took as his starting point Roger Schank’s programs that used scripts to answer questions about ordinary human situations, such as having a meal in a restaurant. These were firmly in the GOFAI tradition, manipulating symbols according to formal rules, and incorporating representations of relevant knowledge. Supporters of strong AI claimed that these programs really understood the questions and their answers. This is what Searle attacked.
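To make the target of the attack concrete, here is a minimal sketch of script-based question answering in the spirit of Schank’s programs. It is a hypothetical toy, not Schank’s actual code: RESTAURANT_SCRIPT, the event tuples, and the answer function are all invented for illustration.

# A toy 'restaurant script': stereotyped knowledge of a situation stored
# as an ordered list of event tuples with role slots. Invented example,
# not Schank's actual representation.
RESTAURANT_SCRIPT = [
    ("enter", "customer"),
    ("order", "customer", "food"),
    ("eat", "customer", "food"),
    ("pay", "customer"),
    ("leave", "customer"),
]

def answer(story_events, question_verb):
    """Answer 'did the <verb> happen?' by formal matching alone.

    The program shuffles symbols such as 'eat' according to rules;
    nothing in it represents what eating actually is.
    """
    mentioned = {event[0] for event in story_events}
    # Script-based inference: any scripted event the story leaves out
    # is assumed to have happened in its usual place.
    inferred = [e for e in RESTAURANT_SCRIPT if e[0] not in mentioned]
    all_events = story_events + inferred
    return "yes" if any(e[0] == question_verb for e in all_events) else "no"

story = [("enter", "customer"), ("order", "customer", "hamburger")]
print(answer(story, "eat"))   # -> 'yes': the script fills the gap
print(answer(story, "sing"))  # -> 'no': no rule mentions singing

Everything the program ‘knows’ is carried by the shapes of the symbols and the rules for matching them; that is precisely the feature Searle exploits.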
‘Suppose that I’m locked in a room and given a large batch of Chinese writing. Suppose furthermore (as is indeed the case) that I know no Chinese, either written or spoken’, begins Searle (1980, pp. 417–418). Inside his room, Searle has lots of Chinese ‘squiggles’ and ‘squoggles’, together with a rule book in English. People outside the room pass in two batches of Chinese writing which are, unbeknown to Searle, a story, in Chinese of course, and some questions about the story. The rule book tells Searle which squiggles and which squoggles to send back in response to which ‘questions’. After a while he gets so good at following the instructions that, from the point of view of someone outside the room, his ‘answers’ are as good as those of a native Chinese speaker. He next supposes that the outsiders give him a story and questions in English, which he answers just as a native English speaker would – because he is a native English speaker. So his answers in both cases are indistinguishable, but there is a crucial difference: in the case of the English stories, he really understands them; in the case of the Chinese stories, he understands nothing.

So here we have John Searle, locked in his room, acting just like a computer running its program. He has inputs and outputs, and the rule book to manipulate the symbols, but he does not understand the Chinese stories. The moral of the tale is this: Searle has everything a computer running the program has, and he understands nothing of the Chinese stories; therefore a computer running such a program understands nothing of the stories it processes either, whether they are in Chinese, English, or any other language.
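Searle’s point is a claim about formal equivalence, and it can be made concrete. The sketch below is an invented toy, not anything Searle or Schank wrote: RULE_BOOK and room are hypothetical names, and the strings merely stand in for symbols the operator cannot read.

# The rule book reduced to a pure symbol-to-symbol mapping. The entries
# are made up; the point is that they could be anything.
RULE_BOOK = {
    ("squiggle", "squoggle"): ("squoggle", "squiggle", "squiggle"),
    ("squoggle", "squoggle"): ("squiggle",),
}

def room(question_symbols):
    """Produce an 'answer' by matching the shape of the input alone.

    Replace every symbol with a different one throughout and the
    behaviour is unchanged: the syntax carries on, but semantics
    never enters the picture.
    """
    return RULE_BOOK.get(tuple(question_symbols), ("squiggle",))

print(room(["squiggle", "squoggle"]))  # fluent output, zero understanding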

Searle concludes that whatever purely formal principles you put into a computer,
they will not be sufficient for real understanding. Another way of putting it is that
you cannot get semantics (meaning) from syntax (rules for symbol manipulation).
Any meaning or reference that the computer program has is in the eye of the user,
not in the computer or its program. So Strong AI is false.
The Turing test is also challenged, because in both languages Searle claims he passes the test perfectly, yet in English he really understands while in Chinese he doesn’t. Note that, for Searle, this shows that there is something extra that he has and the computer does not. This something is real (as opposed to as-if) intentionality (the capacity to be about something). He concludes that ‘Whatever it is that the brain does to produce intentionality, it cannot consist in instantiating a program since no program, by itself, is sufficient for intentionality’ (Searle, 1980, p. 424).

‘no program [. . .] is sufficient for intentionality’ (Searle, 1980, p. 424)
