New Scientist - 21.09.2019



Michael Graziano is a professor of psychology and neuroscience at Princeton University and author of Rethinking Consciousness


“To engineer human-like consciousness into a machine would require four ingredients”


Sometimes, the best way to understand a thing is to try to build it. According to this new idea we should be able to engineer human-like consciousness into a machine. It would require just four ingredients: artificial attention, a model of that attention, the right range of content (information about things like senses and emotions) and a sophisticated search engine to access the internal models and talk about them.
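
As a way of fixing ideas, here is a minimal sketch in Python of how those four ingredients might slot together. It is purely illustrative: every class, field and message below is invented for the example and comes neither from the attention schema theory nor from any existing system.

```python
# A minimal sketch of the four ingredients, written as ordinary Python.
# Every name here is invented for illustration.

from dataclasses import dataclass, field
from typing import Dict, Optional


@dataclass
class AttentionSchema:
    """Ingredient 2: a simplified internal model of what attention is doing."""
    currently_attended: Optional[str] = None
    description: str = "a private, non-physical grasp of whatever is attended"


@dataclass
class ConsciousMachineSketch:
    # Ingredient 3: a range of content domains the machine can attend to.
    contents: Dict[str, str] = field(default_factory=dict)
    # Ingredient 1: the attention process itself (here just a selection).
    attended_item: Optional[str] = None
    # Ingredient 2: the model of that attention.
    schema: AttentionSchema = field(default_factory=AttentionSchema)

    def attend(self, domain: str) -> None:
        """Select one content domain and update the attention schema."""
        self.attended_item = self.contents.get(domain)
        self.schema.currently_attended = self.attended_item

    def report(self) -> str:
        """Ingredient 4: a 'search engine' over the internal models, so the
        machine can be asked about them and answer in words."""
        if self.schema.currently_attended is None:
            return "I am not aware of anything in particular."
        return ("I am aware of " + self.schema.currently_attended + "; "
                "it feels like " + self.schema.description + ".")


machine = ConsciousMachineSketch(contents={"vision": "a red apple"})
machine.attend("vision")
print(machine.report())
```
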
The first component, attention, is one of the most basic processes in most nervous systems. It is nicely described by the global workspace theory. If you look at an object such as an apple, the brain signals related to the apple may grow in strength and consistency. With sufficient attentional enhancement, these signals can reach a threshold where they achieve “ignition” and enter the global workspace. The visual information about the apple becomes available for systems around the brain, such as speech systems that allow you to talk about the apple, motor systems that allow you to reach for it, cognitive systems that allow you to make high-level decisions about it, and memory systems that allow you to store that moment for possible later use.
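
A toy sketch, again in Python, of the dynamic just described: one signal is boosted by attention, crosses an ignition threshold and is then broadcast to every subsystem at once. The signals, the threshold and the subsystem names are all made up for the illustration.

```python
# Toy illustration of "ignition" in a global workspace: competing signals,
# one boosted by attention, cross a threshold and are then handed to every
# subsystem at once. All numbers and names are invented.

IGNITION_THRESHOLD = 1.0

def global_workspace_step(signals, attended, boost=0.6):
    """Boost the attended signal; whatever crosses the threshold 'ignites'."""
    boosted = {name: strength + (boost if name == attended else 0.0)
               for name, strength in signals.items()}
    ignited = [name for name, strength in boosted.items()
               if strength >= IGNITION_THRESHOLD]
    return boosted, ignited

def broadcast(item, subsystems):
    """Once ignited, the same item becomes available to speech, motor,
    decision and memory systems at the same time."""
    return {name: handler(item) for name, handler in subsystems.items()}

signals = {"apple": 0.5, "background hum": 0.3, "itchy sock": 0.2}
subsystems = {
    "speech":   lambda x: f"say: 'I see the {x}'",
    "motor":    lambda x: f"reach toward the {x}",
    "decision": lambda x: f"decide whether to eat the {x}",
    "memory":   lambda x: f"store an episode about the {x}",
}

boosted, ignited = global_workspace_step(signals, attended="apple")
for item in ignited:
    for subsystem, action in broadcast(item, subsystems).items():
        print(f"{subsystem}: {action}")
```
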
Scientists have already built artificial versions of attention, including at least a simple version of the global workspace. But these machines show no indication of consciousness.

The second component that our conscious machine requires is an attention schema, the crucial internal model that describes attention in a general way, and in so doing informs the machine about consciousness. It depicts attention as an invisible property, a mind that can experience or take possession of items, something that in itself has no physical substance but still lurks privately inside an agent. Build that kind of attention schema, and you will have a machine that claims to be conscious in the same ways that people do.
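
The sketch below is one way to picture that claim, nothing more. If the machine's only route to its own attention is a stripped-down model that leaves out the physical machinery, then its answers about itself come out sounding like reports of a private, inner experience. Every value and answer string here is invented.

```python
# A toy sketch of an attention schema. The machine's detailed attention
# state and its simplified self-model are both invented for illustration.

# What is actually going on inside the machine (the detailed reality).
attention_state = {
    "target": "the apple",
    "mechanism": "competitive signal enhancement across processing units",
    "substrate": "electronic hardware",
}

# The attention schema: a cut-down description of the same process.
# Crucially, the mechanism and the substrate are missing from the model.
attention_schema = {
    "target": attention_state["target"],
    "nature": "a private, subjective grasp of the target",
    "substrate": None,   # the model carries no physical detail at all
}

def introspect(question: str) -> str:
    """Answer questions about itself using only the schema; the machine
    has no other route to its own attention."""
    if question == "what are you aware of?":
        return "I am aware of " + attention_schema["target"] + "."
    if question == "is your awareness physical?":
        if attention_schema["substrate"] is None:
            return "No, it has no physical substance that I can detect."
    return "I cannot say."

print(introspect("what are you aware of?"))
print(introspect("is your awareness physical?"))
```
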
The third component our machine needs is the vast stream of material that we associate with consciousness. Ironically, the hard problem – getting the machine to be conscious at all – may be the easy part, and giving the machine the range of material of which to be conscious may be the hard part. Efforts to build conscious content might begin with sensory input, especially vision, because so much is known about how sensory systems work in the brain and how they interact with attention. But a rich sensory consciousness on its own won’t be enough. Our machine should also be able to incorporate internal items such as abstract thought and emotion.

Here the engineering problem becomes really tricky. Little is known about the information content in the brain that lies behind abstract thought and emotion, or how they intersect with the mechanisms of attention. Sorting out how to build a machine with that content could take decades.

Talking my language

The final component our conscious machine requires is a talking search engine. Strictly speaking, talking isn’t necessary for consciousness, but for most people the goal of artificial consciousness is a machine that has a human-like ability to speak and understand. We want to have a good conversation with it.

The problem is deceptively hard. We already have digital assistants like Siri and Alexa, but these are limited in their functions. You give them words, they search for words on the internet, and they then give you back more words. If you ask for the nearest restaurant, the digital assistant doesn’t know what a restaurant is, other than as a statistical clustering of words. In contrast, the human brain can translate speech into non-verbal information and back again. If someone asks you how the taste of a lemon compares with that of an orange, you translate the speech into taste information and compare the two remembered tastes, then translate back into words to give your answer. This easy back-and-forth conversion between speech and many other information domains is challenging to do artificially. Our conscious machine would need to correlate information across every imaginable domain, a problem that hasn’t yet been solved in artificial intelligence.

Given all the promise and all the difficulties, just how close are we to conscious machines? If the attention schema approach is correct, the first attempts at visual consciousness could be built with existing technology. But it will take a lot longer to give machines a human-like stream of consciousness. It will take time to build a conscious machine capable of seeing, hearing, tasting, touching, thinking abstract thoughts and feeling emotions, with a single integrated focus of attention to coordinate within and between all those domains, and able to talk about that full range of content. But I believe it will happen.

To me, though, the purpose of this thought experiment isn’t to advocate for conscious robots. The point is that consciousness itself can be understood. It isn’t an ethereal essence or an inexplicable mystery. The attention schema theory puts it in context and gives it a concrete role in adaptation and survival. Instead of an ill-defined epiphenomenon, a fog extruded by the brain and floating between the ears, consciousness becomes a crucial component of the cognitive machine. ❚


The big challenge will be giving a robot human-like sensory and emotional input (Image: Manana Kvernadze/EyeEm/Getty)