the brain does to produce intentionality, it cannot consist in instantiating a program since no program, by itself, is sufficient for intentionality’ (1980, p. 424). The something is also, he claims, subjective, and this is where the argument becomes directly relevant to consciousness.


Reaction to the Chinese Room has been ferocious for decades. Searle (1980) himself listed six replies and rebutted them in turn, and many more followed. Among them, the ‘systems reply’ argues that while Searle himself might not understand Chinese, the whole distributed cognitive system consisting of him and the room does. Searle responds that he could internalise all the rules and do the manipulations in his head and he still wouldn’t understand Chinese. The ‘robot reply’ suggests putting a computer into a robot and letting that interact with the outsiders, claiming that a machine which could interact with the world the language refers to would understand, but Searle responds that adding a set of causal relations with the outside world makes no difference because you could put him inside the robot and he would still just be manipulating symbols and would still not understand Chinese. The ‘brain simulator reply’ proposes a program that simulates the actual sequence of neuron firings in a real Chinese brain. Searle responds that as long as this program only simulates the formal properties of the brain, it misses the crucial causal properties that allow brains to cause minds: the properties that cause consciousness and intentional states.


The argument started as a refutation of Strong AI. Have things changed with the advent of connectionism and behaviour-based robotics? The robot reply was a step in this direction because it suggested that interaction with the real world was essential for understanding or intentionality. As McGinn puts it, ‘Internal manipulations don’t determine reference, but causal relations to the environment might’ (1987, p. 286). Another way of saying this is that the symbols must be grounded in the real world, because it is only through symbol grounding that we humans come to understand and have intentional states (Harnad, 1990; Velmans, 2000). Similarly, Chalmers (1996) points out that a computer program is a purely abstract object, while human beings are physically embodied and interact causally with other physical objects. The bridge between the abstract and the concrete, he says, lies in implementation. Having the right program is not sufficient for consciousness, but implementing it is. Ron Chrisley (2009) promotes a ‘moderate AI’ position: that modelling necessarily uses properties shared by AI systems and brains, but instantiating these common properties is not sufficient for consciousness. Something more, such as symbol grounding or biology, might be needed. Moderate AI, he says, is immune to the Chinese Room argument.


Dennett presses a version of the systems reply. The problem with this thought
experiment, he suggests, is that Searle misdirects our imagination by luring us


Searle’s Chinese Room article is a ‘religious diatribe against AI, masquerading as a serious scientific argument’ (Hofstadter, in Searle, 1980, p. 433)

FIGURE 12.10 • Searle asks us to imagine that he is locked in a room. People pass in squiggles and squoggles. He looks up what to do in his rule book and passes out more squiggles and squoggles. Unbeknown to him, the symbols being passed in are Chinese stories and questions, and the symbols he passes out are answers. To the people outside he seems to understand Chinese, but he is like a computer, manipulating symbols according to rules, and he does not understand a word.