Robot Consciousness
(typically) unique to that level, and so we invent a new vocabulary to describe them. And not all
of them involve sentences and rules.
Perhaps qualia too are high-level, multiply realizable properties. Indeed, the very possibility
of synthetic robot consciousness may just require that they are. This proposal, in any case, has the
great virtue of explaining why, when we view the brain in terms of low-level properties (neu-
rons, neurotransmitters, action potentials, etc.), not only do we see ‘nothing resembling a per-
cept’—no qualia—but we also have trouble imagining how there could be any qualia inhabiting
the system. Likewise, when we focus just on the hardware of a computer, it is hard to imagine
how it could hold a virtual world with a landscape full of avatars.
But Searle would here object that he is not viewing the system at a low, Leibnizian level, but
at a higher, programming level. Yet to be persuaded by his arguments about conscious mental
events, we must accept, contrary to fact, that the program is the highest possible level of activ-
ity in a computational system. Indeed, if Searle were himself to realize a certain C++ program, he could be
entirely oblivious to the fact that his activities implement a certain type of virtual machine. Even
more surely, he would have no capacity to appreciate how that machine runs a complex Java program
that traffics in countless numerical data structures for tracking the 3D coordinates and proper-
ties of various coarse pixels, data structures that in turn realize a well-equipped avatar running
around in a particular Minecraft seed. The virtual world Searle realizes might, purely from his
low-level vantage, be for all intents and purposes cognitively closed. So perhaps where Searle’s cri-
tique goes wrong is in assuming that the highest level of abstraction for understanding any given
programmed device is the level of the program, the level of rules and sentences.
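To make the layering concrete, here is a minimal sketch in Java of the sort of high-level data structure the paragraph gestures at. The class and field names are hypothetical illustrations, not actual Minecraft code: viewed from below there is nothing but a few numbers being read and written, while viewed from above those same numbers realize an avatar moving through a virtual landscape.

```java
// A toy sketch only: hypothetical class and field names, not actual Minecraft code.
public class AvatarSketch {
    // Low-level view: three doubles and an int shuffled around in memory.
    private final double[] position = new double[3]; // x, y, z coordinates
    private int health = 20;                          // a tracked "property"

    // High-level view: the same numbers, read as an agent moving through a world.
    public void move(double dx, double dy, double dz) {
        position[0] += dx;
        position[1] += dy;
        position[2] += dz;
    }

    public boolean isAlive() {
        return health > 0;
    }

    public static void main(String[] args) {
        AvatarSketch avatar = new AvatarSketch();
        avatar.move(1.0, 0.0, 0.5); // one step across the virtual landscape
        System.out.println("x = " + avatar.position[0] + ", alive = " + avatar.isAlive());
    }
}
```

Nothing at the level of memory reads and writes announces that an avatar is present; that description only becomes available at a higher level of abstraction.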
Consider, in addition, the many realistic computer models of physical systems that are being
run across various supercomputers as we speak. Many are realized using commercially avail-
able physics modeling programs such as LS-DYNA, which originated at Lawrence Livermore
National Laboratory. At the highest programming level, they traffic in sentences that specify
coordinates of basic building blocks (e.g., granular structures or polygon vertices) and rules constraining how the coordinates may
change. In this way, they provide for a kind of virtual clay that can be used to create models of
everything from tornados and hot springs to SUVs. The models are set in motion to see how
things will play out, much as one uses physical modeling media (plastic, metal, etc.) to create and
manipulate scale models. Indeed, these virtual models have many features widely taken to distin-
guish (roughly speaking) imagistic representations from their descriptive counterparts, including
open-ended inferential powers (build anything you like, poke it any which way, and read
off the results), a kind of universal character that is very much like what Descartes claimed no
machine could possess (Waskan 2017).
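As a rough illustration only, and not an excerpt from LS-DYNA or any real solver, the following Java sketch shows the general scheme just described: coordinates of a few building blocks, plus simple rules constraining how those coordinates may change from one time step to the next. All names and numbers are invented for the example.

```java
import java.util.Arrays;

// A minimal sketch of "virtual clay": coordinates of building blocks plus
// rules constraining how those coordinates may change over time.
public class VirtualClay {
    public static void main(String[] args) {
        double[] height = {10.0, 12.0, 15.0};  // vertical coordinates of three lumps
        double[] velocity = {0.0, 0.0, 0.0};   // their vertical velocities
        final double g = -9.8;                 // rule parameter: gravity (m/s^2)
        final double dt = 0.1;                 // rule parameter: time step (s)

        // Set the model in motion and read off the results.
        for (int step = 0; step < 20; step++) {
            for (int i = 0; i < height.length; i++) {
                velocity[i] += g * dt;         // rule: gravity changes velocity
                height[i] += velocity[i] * dt; // rule: velocity changes position
                if (height[i] < 0.0) {         // rule: the ground is impenetrable
                    height[i] = 0.0;
                    velocity[i] = 0.0;
                }
            }
        }
        System.out.println(Arrays.toString(height));
    }
}
```

Scaled up by many orders of magnitude, and with far richer rules, this is the sort of virtual clay from which models of tornados, hot springs, and SUVs are built and poked.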
Now what if, in the human case, the better part of how brains produce conscious experiences
of our surroundings is to use the torrent of stimuli reaching our sense organs to form, through massive
amounts of lower-level information processing, a high-level internal model of what is going on
external to the brain? If that is the case, synthetic robot brains, even ones governed by programs
at some non-terminal level, may be as capable of this as their mushy counterparts. But could
endowing our robotic creations with such internal world models really make them conscious?
Might it give them qualia?
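Whatever the answer to those questions, the informational architecture being proposed is not exotic. The following deliberately simple Java sketch, with invented names and no claim to capture what brains actually do, shows one way a torrent of noisy low-level readings can be compressed, through lower-level processing, into a stable parameter of a high-level model of what lies outside the system.

```java
import java.util.Random;

// A toy sketch: many noisy low-level sensor readings are condensed (here by a
// running average) into one stable parameter of an internal world model.
public class InnerModelSketch {
    public static void main(String[] args) {
        Random sensorNoise = new Random(42);
        double trueDistance = 3.0;      // how far away the wall "really" is
        double modeledDistance = 0.0;   // the system's internal estimate

        int readings = 10_000;          // the low-level torrent of stimuli
        for (int i = 0; i < readings; i++) {
            double reading = trueDistance + 0.5 * sensorNoise.nextGaussian();
            modeledDistance += (reading - modeledDistance) / (i + 1); // running mean
        }
        System.out.printf("internal model: wall is %.2f m away%n", modeledDistance);
    }
}
```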
As always, the answer is far from straightforward, but one finds reasons for optimism by
returning to the beginning of our discussion, to Galileo. Recall his claim that some experienced
properties (colors, tastes, etc.) inhere only in us, whereas others (shape, motion, etc.) inhere in
external objects. But do the latter inhere only in those objects? Surely not. Cases like those sup-
plied by Descartes and Penfield seem to show that we may have experiences of those properties
(or as of them) even in their absence. The takeaway from Galileo may then be that the primary
properties we experience often also exist in a similar form external to our minds, whereas the