New Scientist - USA (2022-02-19)


powerful, flexible AIs? Both Poirazi’s group
and researchers from AI company Numenta
published studies in October 2021 suggesting
that the properties of dendrites could help
tackle one of deep learning’s most debilitating
problems – catastrophic forgetting. This is the
tendency of artificial neural networks to forget
previously learned information when they
learn something new. Using more complex
artificial neurons seems to get around this
by allowing different regions of the network
to specialise in different tasks, says Poirazi.
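Catastrophic forgetting can be seen even in the smallest possible model. The sketch below is a toy illustration, not the dendrite-inspired systems Poirazi's group or Numenta built: a one-parameter model is trained on task A, then on task B, after which its performance on task A collapses because the single shared weight has been overwritten.

```python
def train(w, xs, ys, lr=0.1, steps=200):
    # Plain gradient descent on squared error for the model y = w * x.
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    return w

xs = [1.0, 2.0, 3.0]
task_a = [2 * x for x in xs]   # task A: y = 2x
task_b = [-2 * x for x in xs]  # task B: y = -2x

w = train(0.0, xs, task_a)
err_a_before = sum((w * x - y) ** 2 for x, y in zip(xs, task_a))

w = train(w, xs, task_b)       # now train sequentially on task B
err_a_after = sum((w * x - y) ** 2 for x, y in zip(xs, task_a))
# err_a_after is now far larger than err_a_before: task A has been forgotten
```

Because every task competes for the same shared weights, learning task B necessarily destroys what was learned for task A; the dendrite-style fix described above avoids this by letting different units specialise.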

Flexible thinking
“You have smarter, smaller units, so you
don’t really need the entire network to learn,”
she says. That means previously learned
information in other areas of the network
doesn’t get overwritten. Poirazi suspects this
could also make AIs more flexible. By breaking
problems down into smaller chunks that
are stored in different parts of the network,
it may be easier to recombine them to solve
new challenges that an AI hasn’t seen before.
Not everyone is convinced this is the best
way forward. When Blake Richards at McGill
University in Canada and his colleagues added
dendritic complexity to their neural networks,
they saw no performance gains. Richards has
a hunch that dendrites are simply evolution’s
answer to connecting billions of neurons
within the space and energy constraints
of the brain, which is less of a concern for
AIs running on computers.
For Richards, the key thing we need to tease
out is the “loss function” used by specialised
circuits in biological brains. In AI, a loss
function is the measure that an artificial
neural network uses to assess how well it
performs on a task during training. Essentially,
it is a measure of error. For instance, a language
AI might measure how good it is at predicting
the next word in a sentence.
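For the language example, the standard loss is the negative log-probability the model assigned to the word that actually came next (cross-entropy). This is a minimal sketch with a made-up vocabulary and made-up probabilities, just to show the shape of the measure:

```python
import math

def cross_entropy_loss(predicted_probs, true_index):
    # Negative log-probability of the word that actually came next:
    # near 0 when the model was confident and right,
    # large when it put little weight on the correct word.
    return -math.log(predicted_probs[true_index])

# Hypothetical model output for the word after "the cat sat on the ..."
vocab = ["mat", "dog", "moon"]
probs = [0.7, 0.2, 0.1]  # probabilities the model assigns to each word
loss = cross_entropy_loss(probs, vocab.index("mat"))  # correct word: "mat"
```

Training nudges the network's weights in whatever direction shrinks this number, which is why the choice of loss function effectively defines what the network is "striving towards".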
If we can determine what a particular brain
circuit is striving towards, we could establish
a relevant loss function and use this to train
a neural network to aim for the same goal,
which should, in theory, replicate the brain
function. Richards has tentative evidence
of how this might work. In June 2021, he
and colleagues from McGill University and
AI company DeepMind showed that a single
neural network trained using a loss function
could replicate the two distinct pathways
the visual cortex uses to independently
determine what an object is and where it is.
By repeating this process for the many
specialised networks in the brain, Richards
thinks we could piece together the key
components that make humans such versatile
thinkers. “I suspect it’ll have to be more
modular,” he says. “We’ll want something
that doesn’t look radically different from
the brain in some ways.”
One brain area that could be crucial to
advancing AI is the hippocampus, says
Kimberly Stachenfeld at DeepMind.
Stachenfeld is trying to understand how
neurons in this region help the brain organise
knowledge in a structured way so it can be
reused for new tasks. “It allows us to make
analogies with the past, to reuse information
in new settings and be very dynamic and
flexible and adaptive,” she says.
It is possible to pull more general insights
out of neuroscience to advance AI too, says
Jeff Clune at the University of British Columbia,
Canada, and Californian firm OpenAI. Thinking
about catastrophic forgetting, Clune became
fascinated by the brain’s neuromodulatory
system, in which certain neurons release
chemicals that modulate the activity of other
neurons, often in distant brain regions.
He and his colleagues realised that the
ability to turn learning up or down in separate
parts of their artificial neural network could
help with continual learning by allowing
different regions to specialise in different
tasks. They didn’t try to build a replica of the
neuromodulatory system. Instead, they
trained one neural network to modulate the
activity of another, switching regions of the
second network on and off so it could learn
a series of tasks without forgetting previous
ones. “We weren’t terribly faithful to the
biology, we just took the abstract idea,” says
Clune. “When you’re trying to do bio-inspired
work, you want to get all, and only, the
ingredients that will really move the needle.”
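The abstract idea Clune describes can be sketched in a few lines. In this toy version (a stand-in for, not a reproduction of, his group's system, and with a hand-set gate rather than a learned modulating network), a task signal switches subsets of a model's units on or off, so training on one task never touches the weights another task relies on:

```python
def gate(task_id, n_units=4):
    # Hand-set "modulator": units 0-1 serve task 0, units 2-3 serve task 1.
    return [1.0 if u // 2 == task_id else 0.0 for u in range(n_units)]

def forward(w, x, mask):
    # Only the gated-on units contribute to the output.
    return sum(wi * mi for wi, mi in zip(w, mask)) * x

def train(w, task_id, xs, ys, lr=0.05, steps=300):
    mask = gate(task_id)
    for _ in range(steps):
        for x, y in zip(xs, ys):
            err = forward(w, x, mask) - y
            for u in range(len(w)):
                w[u] -= lr * err * x * mask[u]  # gated-off units get no update
    return w

w = [0.0] * 4
xs = [1.0, 2.0]
w = train(w, 0, xs, [3 * x for x in xs])   # task 0: y = 3x
w = train(w, 1, xs, [-3 * x for x in xs])  # task 1: y = -3x

# Task 0 is still solved: its units were frozen while task 1 trained.
err_task0 = sum((forward(w, x, gate(0)) - 3 * x) ** 2 for x in xs)
```

Because the gate keeps each task's weights in a separate compartment, sequential training no longer causes the forgetting seen when all weights are shared.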
But some insist we should take abstraction
further, focusing not on replicating the brain’s
nuts and bolts, but the higher-level mental
processes involved in gaining knowledge and
reasoning about the world: a top-down rather
than a bottom-up approach. Gary Marcus is
the standard-bearer for this perspective.
Despite impressive progress, says Marcus,
neuroscience can tell us very little about how
the brain achieves higher-level cognitive
capabilities. More importantly, the brain is an
ad hoc solution, cobbled together by aeons of

“We want to emulate what the brain does. Psychology might be more useful for that”

