other systems really work. For example, consider the wall-following robots whose useful behaviour emerged from a couple of sensors and some inherent bias. Or consider the equally simple robots that can gather pucks into heaps. They roam around with a shovel-like collector on the front which either scoops up any pucks they bump into or drops them when it has too many. In consequence, after some time, the pucks are all collected into piles. Observers readily assume that the robots are ‘trying’ to gather up the pucks. In reality, the robots have no goals, no plans, no knowledge of when they have succeeded, and no internal representations of anything at all.
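The heap-building behaviour can be reproduced in a toy simulation built from nothing but these local rules. The sketch below is illustrative only, not the original robots: the grid size, number of robots, carrying capacity, and step count are all assumed values, and each simulated robot simply wanders at random, scooping up whatever pucks it finds until it is carrying ‘too many’, then dumping its whole load onto the next pucks it bumps into. No goal, plan, or representation of a ‘pile’ appears anywhere in the code.

# Toy simulation of goal-free puck-gathering robots (illustrative sketch only;
# all numerical settings are assumptions, not taken from the original experiments).
import random

GRID = 30          # grid is GRID x GRID cells
N_PUCKS = 200      # pucks scattered at the start
N_ROBOTS = 5       # number of wandering robots
CAPACITY = 3       # the "too many" threshold
STEPS = 200_000    # how long the robots wander

random.seed(0)
pucks = {}  # (x, y) -> number of pucks lying in that cell
for _ in range(N_PUCKS):
    cell = (random.randrange(GRID), random.randrange(GRID))
    pucks[cell] = pucks.get(cell, 0) + 1

robots = [{"pos": (random.randrange(GRID), random.randrange(GRID)), "load": 0}
          for _ in range(N_ROBOTS)]

for _ in range(STEPS):
    for r in robots:
        # Random walk with wraparound edges.
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x, y = r["pos"]
        r["pos"] = ((x + dx) % GRID, (y + dy) % GRID)
        here = pucks.get(r["pos"], 0)
        if here == 0:
            continue
        if r["load"] < CAPACITY:
            # Scoop up everything lying in this cell.
            r["load"] += here
            del pucks[r["pos"]]
        else:
            # Carrying too many: dump the whole load onto this existing pile.
            pucks[r["pos"]] = here + r["load"]
            r["load"] = 0

piles = sorted(pucks.values(), reverse=True)
print("piles on the grid:", len(piles), "largest piles:", piles[:5])
print("still being carried:", [r["load"] for r in robots])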
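Run for long enough, the count of separate piles falls steadily while the largest piles grow. That emergent tidiness is all the observers’ impression of robots ‘trying’ to gather pucks amounts to.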


This should remind us that our attributions of intentionality are not to be trusted. A strong impression that a given machine is trying to achieve a goal is no guarantee that it is. And perhaps the same logic should apply when thinking about people as about other machines. As Brooks puts it, ‘we, all of us, overanthropomorphize humans, who are after all mere machines’ (2002, p. 175).


The intentional stance is the attribution of beliefs to a rational agent, and we adopt it all the time. We may be less willing to adopt the ‘phenomenal stance’, by attributing full subjectivity (including consciousness and emotions) to others (Metzinger, 1995b; Robbins and Jack, 2006). Yet we may feel sorry for cartoon characters, love and cherish our dolls and teddies and even our cars, and cringe when we accidentally step on a worm. If asked whether we truly believe that Mickey Mouse, our favourite dolls, or ants and woodlice have subjective experiences, we may emphatically say ‘no’, and yet still behave towards them as though they do. In this way, our natural tendencies to treat others as intentional, sociable, and feeling creatures all confuse the question of artificial consciousness.


This confusion is likely to get deeper as more and more interesting machines are constructed. Among those already with us are some specifically designed to elicit social behaviour from the people they meet. One of Cog’s designers (Concept 12.2), Cynthia Breazeal, was once videotaped playing with Cog. She shook a whiteboard eraser in front of Cog; Cog reached out and touched the eraser; Cynthia shook it again. It looked to observers as though Cynthia and Cog were taking turns in a game.


In fact, Cog was not capable of taking turns; that was a skill scheduled for years further on in its developmental chart. It seemed that Breazeal’s own behaviour was coaxing more abilities out of Cog than had been put in. This set her thinking about how humans interact socially with machines, and to find out more she built Kismet (Breazeal, 2001), a human-like head with some simple abilities built in, and one of the first and best-known ‘social robots’. Many people behaved as though Kismet were alive. They behaved as though Kismet were conscious. More recently, the ‘human-like social robot’ Brian 2 has been designed to be capable of emotional body language, using a variety of postures and movements identified in human interactions (McColl and Nejat, 2014). The robotic head EMYS is part of a project to create robotic companions for humans. It has a roughly spherical head made of three moveable discs for conveying basic emotions like anger, disgust, sadness, and surprise. Thirty-three percent of the 8–12-year-old children surveyed thought it had emotions, and rated it as having a very positive personality on the ‘big five’ personality factors (Kędzierski et al., 2013). These developments show how readily people infer consciousness in non-human machines, confirming just how easy it is to take the intentional stance.


‘We behavioral and cognitive scientists have been trained to view anthropomorphism as a seductive demon at which to shake objectivistic garlic’
(Reber, 2016, p. 3)