268 | 25 INTELLIGENT MACHINERY
the human brain, that not until computers possessed a memory capacity of around a gigabyte
would human-speed AI start to become feasible,^20 and it was about 15 years after Freddy was
switched off for the last time that the Cray-2 supercomputer offered a gigabyte of RAM.
In more recent times, Turing’s ideas also formed the inspiration for the famous series of robots
built by Rodney Brooks’ group in the Artificial Intelligence Laboratory at the Massachusetts
Institute of Technology (MIT).^21 The MIT robots Cog and Kismet were, like Freddy, faltering
first steps towards child machines. Kismet is described in Chapter 30, where some of the philo-
sophical difficulties involved in building an artificial infant are explored.
Testing for thought
How can researchers tell if a computer—whether a humanoid robot or a disembodied
supercomputer—is capable of thought?
This is not an easy question. For one thing, neuroscience is still in its infancy. Scientists do
not know exactly what is going on in our brains when we think about tomorrow’s weather or
plan out a trip to the beach—let alone when we write poetry or do complex mathematics in our
minds. But even if we did know everything there is to know about the functioning of the brain,
we might still be left completely uncertain whether entities without a human (or mammalian)
brain could think. Imagine that a party of extra-terrestrials finds its way to Earth and impresses
us with its mathematics and poetry. We discover that the aliens have no organ resembling a human
brain: inside they are just a seething mixture of gases. Does the fact that these hypothetical
aliens contain nothing like human brain cells imply that they do not think? Or is their math-
ematics and poetry proof enough that they must think—and so also proof that the mammalian
brain is not the only way of doing whatever it is that we call thinking?
Of course, this imaginary scenario about aliens is supposed to sharpen up a question that
is much nearer to home. For ‘alien’, substitute ‘computer’. When computers start to impress us
with their poetry and creative mathematics—if they don’t already—is this evidence that they
can think? Or do we have to probe more deeply, and examine the inner processes responsible
for producing the poetry and the mathematics, before we can say whether or not the computer
is thinking?
Deeper probing wouldn’t necessarily help much in the case of the aliens—because, ex hypothesi,
the processes going on inside them are nothing like what goes on in the human brain. Even
if we never managed to understand the complex gaseous processes occurring inside the aliens,
we might nevertheless come to feel fully convinced that they think, because of the way they lead
their lives and the way that they interact with us. So does this mean that in order to tell whether
a computer thinks we only have to look at what it does—at how good its poetry is—without
caring about what processes are going on inside it?
That was certainly what Turing believed. He suggested a kind of driving test for thinking,^22
a viva voce examination that pays no attention at all to whatever causal processes are going on
inside the candidate—just as the examiner in a driving test cares only about the candidate’s
car-handling behaviour, and not at all about the nature of the internal processes that produce
that behaviour. Turing called his test the ‘imitation game’, but nowadays it is known universally
as the ‘Turing test’. There is more on the Turing test in Chapter 27.
Turing’s iconic test works equally well for computers or aliens. It involves three players: the
candidate and two human beings. One of the humans is the examiner or judge; the other—the