Jonathan Waskan
by itself produce an experience of understanding the meanings of the Chinese expressions; it
does not produce what might be termed comprehension qualia (Waskan 2011). The argument,
he thought, generalized to any other program designed to realize any mental state whatsoever,
whether perceptual, volitional, cognitive, or emotional.
In one clean swing, Searle seemed to have demolished the foundations of this massive (and
lucrative!) framework for explaining and duplicating conscious mental episodes. He claimed
to have shown (a) that programs do not produce or explain conscious episodes (at best they
simulate them much as computers simulate weather without being weather) and (b) that merely
passing behavioral tests like the viva voce test never justifies attributing conscious mental events
to a programmed contrivance.
3 Artificial Neural Networks
Searle’s work, like Leibniz’s, highlights the fact that there is nothing about the entities and
activities under consideration, in this case the application of rules to sentences, remotely indicative of
the production of qualia. He parts ways with Leibniz, however, in claiming that low-level brain
processes are important when it comes to qualia, that to replicate mental states one requires a
system with the same (as yet unknown) relevant causal powers as real neurons.
Recall that McCulloch and Pitts (1943) did initiate a promising program of research into
artificial neural networks (ANNs), simulated networks of simplified neurons. That program has
since bloomed in a thousand ways. Today ANNs underlie the kinds of deep learning that many
everyday contrivances such as smartphones and automated assistants utilize to support sophisti-
cated linguistic interactions. Devices that can pass the viva voce test appear closer than ever, and
it is entirely likely they will be realized through neural processing. But will Siri, for instance,
ever really understand? Or see? Or feel? Searle would offer a qualified “no,” for the ANNs in
question are mere simulations of neurons implemented by traditional programming. He could
just as well carry out those programs and not be subject to any relevant qualia. However, neural
computing is no longer limited to simulations run on von Neumann devices. IBM, for instance,
has unveiled its SyNAPSE chip, containing 1 million artificial neurons and 256 million synaptic
connections.^2 If there is something about the causal powers of real neurons that produces mental
states, it is possible that SyNAPSE’s neurons have it. To this somewhat vague possibility, Searle
would not object.
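The simplified neurons that McCulloch and Pitts proposed can be sketched in a few lines: a unit sums weighted binary inputs and “fires” (outputs 1) exactly when that sum reaches a threshold. The following is a minimal illustration only; the particular weights and threshold are my own choices, not drawn from the text.

```python
# A McCulloch-Pitts unit: the neuron fires (outputs 1) exactly when the
# weighted sum of its binary inputs meets or exceeds a threshold.
# Weights and thresholds below are illustrative, not from the source.

def mp_neuron(inputs, weights, threshold):
    """Return 1 if the weighted sum of binary inputs reaches the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# With weights (1, 1) and threshold 2, the unit computes logical AND:
print(mp_neuron((1, 1), (1, 1), 2))  # fires: 1
print(mp_neuron((1, 0), (1, 1), 2))  # does not fire: 0
```

Networks of such units can, as McCulloch and Pitts showed, compute any Boolean function, which is what made them a promising model of neural computation; nothing in the arithmetic, of course, is any more suggestive of qualia than Searle’s rule-following.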
4 The Limits of Searle’s Critique
Searle, we saw, is not just criticizing the idea that programs can produce conscious mental events.
He is also criticizing such behavioral tests for mindedness as viva voce tests. But let us imagine that
a robot with a fairly sophisticated body and a full complement of sensors has been endowed with
a program that makes it behave like a real human. Better still, imagine a robot constructed along
the lines suggested by Turing (1950)—namely, one with perceptual systems and programmed with
some amount of structure to facilitate language acquisition and reasoning, but that must otherwise
learn on its own. Suppose this robot, call it Eimer, is protected from the corrupting intellectual
influence of qualiaphiles like myself, but over time it nevertheless comes to say things like this:
“How do I know that my red is not your blue?”
“My experience of redness, that color I see, is produced in me somehow, but I don’t
see how it could possibly be explained by electronic states.”