On this view, an adequate theory of consciousness is one that accounts for why we have this particular illusion. This means that a machine would have human-like consciousness only if it were subject to the same kind of illusion. The task is then to understand how the illusion comes about and design a similarly deluded machine.
One possible example is Sloman and Chrisley's (2003) CogAff architecture, developed as a framework for thinking about both natural and artificial information-processing systems, and based on the implicit theory that minds are information-processing virtual machines. They propose 'virtual machine functionalism' (VMF), which avoids some of the problems of other forms of functionalism by including internal processes of the virtual machine that do not have to be closely linked to its input–output relations.
The CogAff architecture can be structured in various ways, for example having a 'multi-window' perception and action system as opposed to restricting the routes through it to give 'peep-hole' perception and action. Or it can use a subsumption architecture that includes a deliberative reasoning ('what if') layer as well as the reactive layer. Critically, it also has a meta-management layer that allows it to attend to aspects of its own internal functioning.
But what about qualia – 'the private, ineffable way things seem to us'? Sloman and Chrisley want to 'explain qualia by providing an explanation of the phenomena that generate philosophical thinking of the sort found in discussions of qualia' (2003, p. 165). Their virtual machine includes processes that classify its own internal states. Unlike words that describe common experiences (such as seeing 'red' in the world), these refer to internal states or concepts that are not strictly comparable from one virtual machine to another – just like qualia. If people protest that there is 'something missing' – the indefinable quality, the what it's like to be, or what zombies lack – their reply is that the fact that people think this way is part of what needs to be explained, and their approach can do just that.
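To make the layered idea more concrete, here is a minimal sketch (in Python) of a CogAff-style agent: a reactive layer maps percepts straight to candidate actions, a deliberative layer runs 'what if' checks on those proposals, and a meta-management layer attends to and labels the agent's own internal states. All class names, thresholds, and state labels are illustrative assumptions for this sketch, not Sloman and Chrisley's implementation.

```python
# Toy sketch of a CogAff-style layered agent.
# Names, thresholds, and labels are hypothetical illustrations only.

from dataclasses import dataclass


@dataclass
class Percept:
    """A single input event reaching the agent's perception column."""
    stimulus: str
    intensity: float


class ReactiveLayer:
    """Oldest layer: maps percepts directly to candidate actions."""
    def react(self, percept: Percept) -> str:
        return "withdraw" if percept.intensity > 0.8 else "approach"


class DeliberativeLayer:
    """Older layer: 'what if' reasoning over the reactive proposal."""
    def deliberate(self, percept: Percept, proposal: str) -> str:
        # Imagine the outcome of the proposed action and revise if needed.
        if proposal == "approach" and percept.stimulus == "obstacle":
            return "plan_detour"
        return proposal


class MetaManagementLayer:
    """Newest layer: attends to, and labels, the agent's own processing."""
    def __init__(self) -> None:
        self.self_report = []

    def classify_internal_state(self, percept: Percept, action: str) -> str:
        # The label is private to this machine: another agent with a
        # different history would carve up its states differently,
        # which is the point of the qualia analogy in the text.
        label = f"state<{percept.stimulus}:{action}>"
        self.self_report.append(label)
        return label


class CogAffStyleAgent:
    """Wires the three layers between perception and action."""
    def __init__(self) -> None:
        self.reactive = ReactiveLayer()
        self.deliberative = DeliberativeLayer()
        self.meta = MetaManagementLayer()

    def step(self, percept: Percept) -> str:
        proposal = self.reactive.react(percept)
        action = self.deliberative.deliberate(percept, proposal)
        self.meta.classify_internal_state(percept, action)
        return action


if __name__ == "__main__":
    agent = CogAffStyleAgent()
    print(agent.step(Percept("obstacle", 0.3)))   # plan_detour
    print(agent.step(Percept("flash", 0.95)))     # withdraw
    print(agent.meta.self_report)                 # the agent's own state labels
```

The final layer exists only to illustrate the argument above: the labels it generates depend on this particular machine's architecture and history, so they are not strictly comparable from one machine to another.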


Extending this line of thinking, another obvious contributor to illusions is language. For example, the self has been described as a construct of language, a 'center of narrative gravity', a 'benign user illusion' that emerges in creatures who use language, or a 'selfplex' constructed by and for the replication of memes (Chapters 5 and 11). The implication here is that if any machine – or indeed any non-human animal – were capable of using language, and capable of talking about 'I', 'me', and 'mine', it would also fall for the illusion that it was an experiencing self, and would then be conscious like us.

SPEAKING MACHINES
Charles Darwin's grandfather, Erasmus Darwin, built a speaking machine around 1770 that could (just about) say 'Mama' and 'Papa'. An entrepreneur offered him £1,000 if he could get it to recite the Lord's Prayer and the Ten Commandments, but his money was safe. Since then we have enjoyed fantastic speaking machines that play recorded speech, read aloud from printed text, or
turn spoken language into print. Then there are computers that will tell you, in a perfectly comprehensible, if annoying, voice, that they think you have made a mistake.

FIGURE 12.15 • The CogAff schema: superimposing towers and layers (Sloman and Chrisley, 2003, p. 163). [Figure: perception, central processing, and action columns crossed by three layers – meta-management (reflective processes, newest), deliberative reasoning ('what if' mechanisms, older), and reactive mechanisms (oldest).]
