into imagining that a very simple table-lookup program could do the job, when really ‘no such program could produce the sorts of results that would pass the Turing test, as advertised’ (Dennett, 1991, p. 439). Complexity does matter – so even if a hand calculator does not understand what it is doing, a more complex system, like one that passes the Turing test, could. He suggests that we should think of understanding as a property that emerges from lots of distributed quasi-understandings in a large system (p. 439).
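To see why sheer table lookup is hopeless, it helps to make the idea concrete. The sketch below is a hypothetical illustration (not anything Dennett himself wrote): it implements a conversation partner as a literal lookup table keyed on the whole conversation so far, and the comments note why the table needed to cover a real Turing-test conversation could never exist.

```python
# A conversation partner as a literal lookup table: the reply is found by
# matching the entire conversation so far. (Hypothetical sketch.)

table = {
    ("Hello.",): "Hi there. What shall we talk about?",
    ("Hello.", "Hi there. What shall we talk about?", "The weather."):
        "It has been mild for the time of year, hasn't it?",
    # ... one entry for every possible conversation prefix ...
}

def reply(history):
    """Look up the canned reply for exactly this conversation history."""
    return table.get(tuple(history), "(no entry)")

print(reply(["Hello."]))  # -> Hi there. What shall we talk about?

# The catch: with a 10,000-word vocabulary, a single 20-word sentence
# already has 10,000**20 = 10**80 possible forms, so a table covering
# even one Turing-test conversation would outstrip the storage capacity
# of the observable universe.
```

A system that actually passed the test would need compact, generative machinery rather than rote lookup, which is just Dennett’s point that complexity matters.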
We might go even further and reject Searle’s thought experiment (like the zombie argument or Mary the colour scientist we considered in Chapter 2) on the grounds that it instructs us to imagine something impossible. Searle claims that with only the Chinese symbols and his rule book (or even with the rules memorised and inside his head), he really could pass the Turing test without understanding a word of Chinese. But what if he couldn’t? It might turn out that symbol grounding, or learning by interactions with the real world, or something else again, is necessary for passing the test as well as for ‘really understanding’ a language. In this case, there are only two options. Either he does not have these necessities, and his symbol manipulations fail to convince the Chinese people outside, or he does, and that means he comes to understand Chinese in the process. Either way, the scenario Searle described in the original thought experiment might be impossible.
Just as with Mary and zombies, there is no final consensus on what, if anything, the Chinese Room shows. Some people think it shows nothing. Some people think it demonstrates that you cannot get semantics from syntax alone, and that a machine could not be conscious simply by virtue of running the right program. Some (perhaps a minority) agree with Searle that it demonstrates a fundamental difference between the real, conscious intentionality that we humans have, and merely as-if intentionality. In this case, machines could be conscious only if they had the same causal properties as living human brains, whatever those properties are.


HOW TO BUILD A CONSCIOUS MACHINE


Many roboticists and computer engineers ignore all the arguments and simply get on with pursuing their ‘Holy Grail’: ‘the artificial consciousness quest – nothing less than the design of an artificial subject’ (Chella and Manzotti, 2007, p. 10).
There are two main ways of setting about the task. The first asks how to build a
machine that seems to be conscious; the second asks how to build a machine that
really is conscious (whatever that means).
But some say there is no need for a grand quest, for conscious artificial machines
are all around us already.

THEY’RE ALREADY CONSCIOUS


In 1979, John McCarthy, one of the founders of AI, claimed that machines as
simple as thermostats can be said to have beliefs. John Searle was quick to
challenge him, asking ‘John, what beliefs does your thermostat have?’ Searle

IS THIS MACHINE CONSCIOUS?

‘My thermostat has three beliefs – it’s too hot in here, it’s too cold in here, and it’s just right in here.’

(McCarthy, in Searle, 1984, p. 30)
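McCarthy’s claim is easy to make concrete, and doing so shows how little machinery the three ‘beliefs’ involve. The sketch below is a hypothetical illustration, assuming a fixed target temperature and tolerance: the thermostat’s whole ‘belief’ repertoire reduces to a three-way comparison.

```python
# A thermostat with McCarthy's three 'beliefs', modelled as a three-way
# comparison against a target temperature. (Hypothetical sketch: the
# target and tolerance values are assumptions, not from the source.)

TARGET = 20.0     # desired room temperature in degrees Celsius
TOLERANCE = 0.5   # how far from TARGET still counts as 'just right'

def belief(current_temp):
    """Return the thermostat's current 'belief' about the room."""
    if current_temp > TARGET + TOLERANCE:
        return "it's too hot in here"
    if current_temp < TARGET - TOLERANCE:
        return "it's too cold in here"
    return "it's just right in here"

for temp in (25.0, 15.0, 20.2):
    print(f"{temp}°C: {belief(temp)}")
# 25.0°C: it's too hot in here
# 15.0°C: it's too cold in here
# 20.2°C: it's just right in here
```

Whether a state this simple deserves to be called a belief is, of course, exactly what Searle’s challenge disputes.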
