New Scientist - 21.09.2019




wrong. Researchers debate the exact source
of these mistaken self-descriptions and the
reason we seem to be mentally captive to them.
Engineering, and the science of robotics in
particular, tells us that every good control
device needs a model – a quick sketch – of
the thing it is controlling. We already know
from cognitive neuroscience that the brain
constructs many internal models – bundles
of information that represent items in the
real world. These models are simplified
descriptions, useful but not entirely accurate.
For example, the brain has a model of the
body – called the body schema – to help control
movement of the limbs. When someone loses
an arm, the model of the arm can linger on
in the brain so that people report feeling a
ghostly, phantom limb. But the truth is, all of
us have phantom limbs, because we all have
internal models of our real limbs that merely
become more obvious if the real limb is gone.
By the same engineering logic, the brain
needs to model many aspects of itself to be
able to monitor and control itself. It needs
a kind of phantom brain. One part of this
self-model may be particularly important
for consciousness. Here’s why. Too much
information flows through the brain at any
moment for it all to be processed in equal
depth. To handle that problem, the system
evolved a way to focus its resources and shift
that focus strategically from object to object:
from a nearby object to a distant sound, or
to an internal event such as an emotion or
memory. Attention is the main way the brain
seizes on information and processes it deeply.
To control its roving attention, the brain needs
a model, which I call the attention schema.
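
The control-engineering point above can be made concrete with a toy sketch. This is my illustration, not the author's model: a controller steers a "spotlight" toward a target by consulting only its own simplified internal estimate of where the spotlight is, analogous to the brain regulating attention through an attention schema rather than through the full detail of its own signals.

```python
# Toy sketch (illustrative assumption, not from the article): a controller
# that acts on a simplified internal model of the thing it controls.

class SpotlightController:
    def __init__(self):
        self.model_position = 0.0   # the controller's rough sketch of the spotlight

    def step(self, target, observed_position):
        # Nudge the internal model toward a coarse observation;
        # the model is a simplification, never the full state.
        self.model_position += 0.5 * (observed_position - self.model_position)
        # Issue a command based on the model, not the raw state.
        return 0.8 * (target - self.model_position)

def simulate(target=10.0, steps=40):
    true_position = 0.0
    controller = SpotlightController()
    for _ in range(steps):
        command = controller.step(target, true_position)
        true_position += command   # the "plant" responds to the command
    return true_position

print(round(simulate(), 2))  # → 10.0
```

Despite working only from its quick-and-dirty model, the controller still brings the spotlight to the target, which is the engineering sense in which a simplified self-model can be useful rather than a flaw.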

Ghostly essence
Our attention schema theory explains why
people think there is a hard problem of
consciousness at all. Efficiency requires the
quickest and dirtiest model possible, so the
attention schema leaves aside all the little
details of signals and neurons and synapses.
Instead, the brain describes a simplified version
of itself, then reports this as a ghostly, non-physical
essence, a magical ability to mentally
possess items. Introspection – or cognition
accessing internal information – can never
return any other answer. It is like a machine
stuck in a logic loop. The attention schema is
like a self-reflecting mirror: it is the brain’s
representation of how the brain represents
things, and is a specific example of higher-order
thought. In this account, consciousness
isn’t so much an illusion as a self-caricature.

A major advantage of this idea is that
it gives a simple reason, straight from
control engineering, for why the trait of
consciousness would evolve in the first place.
Without the ability to monitor and regulate
your attention, you would be unable to
control your actions in the world. That
makes the attention schema essential
for survival. Consciousness, in this view,
isn’t just smoke and mirrors, but a crucial
piece of the engine. It probably co-evolved
with the ability to focus attention, just as
the arm schema co-evolved with the arm.
In which case, it would have originated as
early as half a billion years ago.



