
is the ability to converse like a human. Specifically, Turing proposed that we should attribute
thinking (and other mental states) to a device insofar as it performs well at the Imitation Game.
Turing’s variant of this parlor game involves two players, a human and a machine, in contact with
a judge only via a teletype system—in essence, texting. The judge would ask questions with the
goal of distinguishing human from contrivance. If the latter could fool the judge into thinking
that it was the human a significant amount of the time, we would have about as much reason to
attribute mental states to it as we have with regard to our fellow humans. This, you will recall, is
the kind of test that Descartes thought no contrivance could pass on the grounds that our speak-
ing and reasoning repertoire is unbounded. But computer scientists had already shown in some
sense how to elicit a boundless repertoire from finite mechanisms.
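
Here is a toy illustration (the grammar is invented for this purpose and is not Turing's) of how a finite mechanism can yield an unbounded repertoire: a single recursive rule generates sentences of arbitrary length.

```python
# Toy illustration: a finite grammar with one recursive rule produces
# arbitrarily long sentences. The grammar itself is invented for this example.

def sentence(depth):
    """S -> 'the machine replied that ' S | 'it was thinking.'"""
    if depth == 0:
        return "it was thinking."
    return "the machine replied that " + sentence(depth - 1)

for d in range(3):
    print(sentence(d))
```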
With research advancing rapidly, it seemed that the creation of devices exhibiting viva voce
behavior would not be far off. One important side note here is that by the 1960s, model-
ers worked less and less with the kind of basic machine code (e.g., 1’s and 0’s) discussed by
Turing and more with high-level programming languages that had been implemented with
that code. These artificial languages enabled researchers to focus on programming in sen-
tence-like representations (e.g., ‘on<ball, table>.’) and rules (e.g., ‘if on<x, y>, remove x’) for
making inferences from, and responding to, them. Soon, researchers needed to know little to
nothing about the lower-level implementation details. All that mattered was the program. In
fact, because a given program could be implemented by various machine architectures (e.g.,
Turing’s or von Neumann’s), and each of these could be realized by various physical substrates
(e.g., McCulloch-Pitts neurons, vacuum tubes, transistors, gears and pistons, and, we shall see,
people), many came to see implementation as only of ancillary concern, both in computers
and in humans. The proper level for understanding and implementing conscious minds was
the level of programs.
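
To make the flavor of such representations concrete, the following sketch (in Python, for illustration only; the names and data structures are not drawn from any historical system) encodes facts as sentence-like tuples and the rule 'if on<x, y>, remove x' as a procedure over them.

```python
# Hypothetical sketch: facts as sentence-like tuples, a rule as a procedure.
# Nothing here reproduces any historical system's code.

facts = {("on", "ball", "table"), ("on", "book", "shelf")}

def remove_on(facts, x):
    """Apply 'if on<x, y>, remove x': drop any fact asserting that x is on something."""
    return {f for f in facts if not (f[0] == "on" and f[1] == x)}

print(remove_on(facts, "ball"))   # {("on", "book", "shelf")}
```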
One early system that seemed promising with regard to viva voce behavior was Schank and
Abelson’s (1977) Script Applier Mechanism (SAM), which had a human-like ability to infer
information only implicit in conversational context. For instance, told that Jonah dined at
McDonald’s, we might draw upon our generic knowledge of typical sequences of events (e.g.,
types of dining experience) to infer that Jonah retrieved his food from the counter, seated him-
self, and left no tip. SAM could infer these things as well, and even do so in different languages.
Naturally there were those who felt that SAM understood language or was at least a precursor to
the devices that would.
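
The following is a miniature, hypothetical sketch of the script idea, not Schank and Abelson's actual implementation: a script lists the default sequence of events for a stereotyped situation, and steps the story leaves unmentioned are filled in as inferred defaults. The script contents and function names here are invented for illustration.

```python
# Hypothetical sketch of script-based inference. A "script" is a default event
# sequence for a stereotyped situation; unmentioned steps are inferred by default.

FAST_FOOD_SCRIPT = [
    "customer orders at the counter",
    "customer pays at the counter",
    "customer carries food to a table",
    "customer eats",
    "customer leaves without tipping",
]

def apply_script(script, mentioned_events):
    """Return the full event sequence, marking each step as stated or inferred."""
    return [(step, step in mentioned_events) for step in script]

story = {"customer eats"}   # e.g., the story only says that Jonah dined at McDonald's
for step, stated in apply_script(FAST_FOOD_SCRIPT, story):
    print(("stated  " if stated else "inferred") + ": " + step)
```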
Enter Searle (1980) who, in perhaps the most widely cited philosophy essay of the 20th
century, argued that this entire framework was wrongheaded. His simple thought experiment,
termed the Chinese Room, is quite reminiscent of Leibniz’ Mill. He imagines a next-generation
SAM program that can achieve human-level performance on reading and question answering
for a language (Chinese) that he, Searle, does not comprehend. But instead of imagining step-
ping into a mill containing low-level brainware, he imagines stepping into a room containing
SAM’s high-level software, a vast set of syntactic expressions and inference rules, all in paper and
book form. Searle’s claim, in essence, was that by using the SAM program to process and manip-
ulate Chinese expressions (i.e., the story input, the generic knowledge, and the subsequent
questions about the story) he would become just another of the many possible implementations
of the SAM program. Yet he would never, by virtue of this fact, comprehend a word of Chinese.
He would only be conscious of following rules for manipulating expressions that were meaning-
less to him (squiggles). Now if he, in implementing SAM, does not thereby come to understand
a word of Chinese, neither, he contended, would any other device (e.g., a so-called electronic
brain) implementing SAM. Indeed, no matter what subsequent program is put forward, Searle
could repeat the thought experiment with the same result. Running a program thus does not
