38 | New Scientist | 19 February 2022
Features Cover story

Making a mind

In the push to make artificial intelligence that thinks like humans, many researchers are focused on fresh insights from neuroscience. Should they be looking to psychology instead, asks Edd Gent

ARTIFICIAL intelligence has come a long way. In recent years, smart machines inspired by the human brain have demonstrated superhuman abilities in games like chess and Go, proved uncannily adept at mimicking some of our language skills and mastered protein folding, a task too fiendishly difficult even for us.

But with various other aspects of what we might reasonably call human intelligence – reasoning, understanding causality, applying knowledge flexibly, to name a few – AIs still struggle. They are also woefully inefficient learners, requiring reams of data where humans need only a few examples.

Some researchers think all we need to bridge the chasm is ever larger AIs, while others want to turn back to nature’s blueprint. One path is to double down on efforts to copy the brain, better replicating the intricacies of real brain cells and the ways their activity is choreographed. But the brain is the most complex object in the known universe and it is far from clear how much of its complexity we need to replicate to reproduce its capabilities.

That’s why some believe more abstract ideas about how intelligence works can provide shortcuts. Their claim is that to really accelerate the progress of AI towards something that we can justifiably say thinks like a human, we need to emulate not the brain – but the mind.

“In some sense, they’re just different ways of looking at the same thing, but sometimes it’s profitable to do that,” says Gary Marcus at New York University and start-up Robust AI. “You don’t want a replica, what you want is to learn the principles that allow the brain to be as effective as it is.”

Whether the mind and the brain can even be thought of as separate is controversial, and neither philosophers nor scientists can pinpoint where one might draw the line. But exactly what point on that spectrum AI researchers should be focused on for inspiration is currently a big debate in the field.

There can be no doubt that the brain has been a handy crib sheet. The artificial neural networks powering today’s leading AIs, such as the impressive language model GPT-3, consist of highly interconnected webs of simple computational units analogous to biological neurons. Like the brain, the behaviour of the network is governed by the strength of its connections, which are adjusted as the AI learns from experience.

This simple principle has proved incredibly powerful and today’s AIs can learn to spot cancer in X-rays, navigate flying drones or produce compelling prose. But they require mountains of data and most struggle to apply their skills outside highly specific niches. They lack the flexible intelligence that allows humans to learn from a single example, adapt experiences to new contexts or use common sense to reason about unfamiliar situations.

One reason might be that the similarities between real brains and AIs are only skin deep. One disparity that has recently come to the fore is in the processing power of artificial neurons. The “point neurons” used in artificial neural networks are a shadow of their biological counterparts, doing little more than totting up inputs to work out what their output should be. “It’s a vast simplification,” says Yiota Poirazi, a computational neuroscientist at the Institute of Molecular Biology and Biotechnology in Greece. “In the brain, an individual neuron is much more complicated.”

There is evidence that neurons in the cortex – the brain region associated with high-level cognitive functions like decision-making, language and memory – carry out complex computations all by themselves. The secret appears to lie in dendrites, the branch-like structures around a neuron that carry signals from other neurons to the cell’s main body. The dendrites are studded with synapses, the contact points between neurons, which pepper them with incoming signals.

We have known for some time that dendrites can modify incoming signals before passing them on. But in a 2020 study, Poirazi and her colleagues at Humboldt University of Berlin found that a single human dendrite can carry out a computation that it takes an artificial neural network of at least two layers, each containing many neurons, to replicate. Moreover, when a group from the Hebrew University of Jerusalem tried to train an AI to mimic all the computations of a single biological neuron, it required an artificial neural network five to eight layers deep to reproduce all of its complexity.

Could these insights point the way to more
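The gap between the two kinds of neuron can be made concrete in a few lines of code. The sketch below (illustrative only, not taken from either study; the function names are invented) shows a “point neuron” that simply tots up weighted inputs against a threshold, and then uses XOR – a textbook computation that no single point neuron can perform, because its outputs are not linearly separable – to show why replicating even one richer computation takes a two-layer network of such units.

```python
def point_neuron(inputs, weights, threshold):
    """The 'point neuron' of artificial neural networks: tot up
    weighted inputs and fire (output 1) if the total crosses a threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

def two_layer_xor(x1, x2):
    """XOR built from point neurons. No single point neuron can compute
    XOR, but a two-layer arrangement of them can."""
    h_or = point_neuron([x1, x2], [1, 1], 0.5)    # fires if either input is on
    h_and = point_neuron([x1, x2], [1, 1], 1.5)   # fires only if both are on
    return point_neuron([h_or, h_and], [1, -2], 0.5)  # "OR but not AND"

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", two_layer_xor(a, b))
```

The hidden layer here is tiny; the point of the studies above is that matching everything a real neuron does takes layers containing many such units.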