Consciousness


Chapter Twelve


The evolution of machines


Kurzweil (1999) retaliates that ‘It is true that machines can’t solve Gödelian impossible problems. But humans can’t solve them either’ (p. 117). We can only estimate
them, and so can computers, including quantum computers. And Turing himself
pointed out that we humans are notoriously error-prone and might even revel in
our limitations. Could machines revel in their limitations? asks Hofstadter (2007):
could a machine be confused? Could it know it was confused?


We do not know the answer, but it seems that none of these arguments proves
the impossibility of building a conscious machine.


If there are some things that machines can never do, we are far from knowing
what they are and why.


PRACTICE 12.2
IS THIS MACHINE CONSCIOUS?

As many times as you can, every day, ask: ‘Is this machine conscious?’
This exercise, like the one about animal consciousness, is directed out
beyond yourself. Whenever you use a phone, laptop, or TV, or depend
on air traffic control or satellite navigation systems, ask ‘Is this machine
conscious?’ You can do the same with fridges, cars, and electronic
games, or indeed anything you like. Explore your own intuitions. Can you
discern the reasons why you are more tempted to attribute some inkling of
consciousness to some machines than to others?

None of the general arguments considered so far has demonstrated that a
machine cannot be conscious. Two further arguments are much more specific
and much more contentious.


THE CHINESE ROOM


Among Turing’s list of arguments against machine thinking is ‘The argument from
consciousness’. This, he says, might be used to invalidate his test, because ‘the
only way by which one could be sure that a machine thinks is to be the machine
and to feel oneself thinking’ (Turing, 1950, p. 446). Even if the machine described
its feelings, we should take no notice. He rejects this argument on the grounds
that it leads only to solipsism – the view that we can never know anything about
other minds than our own – and in this way defends his test. Yet this argument
was not to be so easily defeated. Thirty years later, it gained its most powerful
advocate in the philosopher John Searle, with his famous Chinese Room thought
experiment.


Searle proposed the Chinese Room as a refutation of Strong AI – that is, the claim
that implementing the right program is all that is needed for understanding.
It is most often used to discuss intentionality and meaning with respect to AI,
but many people, including Searle himself, believe that the Chinese Room has
