Jonathan Waskan
A more recent argument that we never will solve the mystery of qualia has been offered by
Jackson (1982). He claims that even if a completed neuroscience supplied all of the physical
facts about the neural processes of perception, it would still leave out crucial facts such as
how red looks to us. Thus, facts about qualia are non-physical facts, perhaps having to do with
an ephemeral exhaust given off by neural processes (a view termed epiphenomenalism). Jackson
maintains that our inability to understand qualia stems from the fact that we evolved to know
only certain facts (which we term physical), the ones important to middle-sized creatures such as
ourselves. But these are only a subset of the facts. The truly surprising thing, he thinks, is how
many facts about nature we do understand.
McGinn (1989) likewise argues that the truth about qualia is cognitively closed to limited
beings such as ourselves. While he thinks qualia somehow result from a natural, biological process
of evolution and development, an understanding of how this occurs may forever elude us. Just
as there are truths about nature closed to sea slugs and monkeys, the truth about consciousness
may be closed to us.
2 The Dawn of Robots
The discussion of robot consciousness became genuinely pressing with the advent of so-called
artificial brains (electronic computers) mid-way through the 20th century. Turing (1937) laid
the foundations for this endeavor about a decade earlier with his attempt to add precision
to the fuzzy colloquial notion of computation. He defined ‘computation’ as the sort of formal
symbol manipulation that can be automated by a hypothetical device (what we now call
a Turing machine) that engages in simple operations such as writing a ‘1’ on a memory tape,
erasing one from it, or moving the tape left or right. Turing showed that such devices could be
configured to perform any of the operations traditionally thought of as computations. One
could also create a device that takes two sets of inputs, say from distinct memory tapes: the data
to be manipulated and the program for manipulating it. Thus, instead of a device built to perform
one and only one type of calculation (e.g., addition), one could, through finite mechanical
means, create a device with an effectively boundless capacity for formal reasoning, a universal
Turing machine, as it is now called.
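To make the idea concrete, the following sketch (in Python, and purely illustrative; the rule format, the tape encoding, and the unary-increment example are assumptions of this sketch rather than anything in Turing's paper) simulates such a device: a table of simple rules tells the machine what to write, which way to move the head, and which state to enter next.

```python
# A minimal, illustrative Turing machine simulator (a sketch, not Turing's own
# formulation). The rule table maps (state, symbol) to
# (symbol to write, head move, next state).

def run_turing_machine(rules, tape, state="start", halt_state="halt", max_steps=1000):
    cells = {i: s for i, s in enumerate(tape)}  # sparse tape; blank cells read as '0'
    head = 0
    for _ in range(max_steps):
        if state == halt_state:
            break
        symbol = cells.get(head, "0")
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells.get(i, "0") for i in range(min(cells), max(cells) + 1))

# Example rules (a hypothetical program): unary increment. Skip right over a
# block of '1's; on reaching the first blank, write a '1' and halt.
increment = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "0"): ("1", "R", "halt"),
}

print(run_turing_machine(increment, "111"))  # prints '1111'
```

Feeding the same simulator a different rule table yields a different computation, which is the point of the universal machine: the program itself can be supplied as data.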
Soon after, McCulloch and Pitts (1943) showed that structures with the rudimentary powers
of neurons could be configured to carry out simple logical operations (e.g., reasoning with ‘or’
and ‘and’) and that a vast collection of these ersatz neurons could approximate a universal Turing
machine. This work helped inspire von Neumann to show, quite concretely, that this kind of
device could be created out of electronic components (e.g., vacuum tubes) (Boden 2006: 196).
Thus was born the modern programmable computer, the basic architecture of which, the von
Neumann architecture, is still in wide use today. This watershed event in human history very
quickly led to the creation of computers that could generate novel proofs of mathematical
theorems, converse about the manipulation of objects, and control robots as they navigated virtual
and real environments.
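Returning to McCulloch and Pitts's first observation, the sketch below models a single ersatz neuron as a threshold unit over binary inputs and wires it up as ‘or’ and ‘and’ gates; the particular weights and thresholds are illustrative assumptions rather than the 1943 paper's own notation.

```python
# A sketch of a McCulloch-Pitts-style threshold unit: it fires (outputs 1) when
# the weighted sum of its binary inputs reaches its threshold. The weights and
# thresholds below are illustrative choices, not the original notation.

def mcp_neuron(inputs, weights, threshold):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def OR(a, b):
    return mcp_neuron([a, b], weights=[1, 1], threshold=1)

def AND(a, b):
    return mcp_neuron([a, b], weights=[1, 1], threshold=2)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "OR:", OR(a, b), "AND:", AND(a, b))
```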
Turing saw straightaway that this research would have as its ultimate goal the creation of
artificial thinking machines. But could computing machines really think? And could they be
conscious? Turing (1950) thought that if we tried to answer these questions directly, we would
quickly get sidetracked into debating the meanings of our vaguely defined mental terms. He
thus sought to do for these terms what he did for ‘computation’—namely, define them (or
redefine them) in a way that renders them useful and precise. For instance, instead of asking whether
or not an electronic contrivance can think, we should ask whether it can exhibit the kinds of
behaviors that normally lead us to attribute thinking to other humans. One good measure here