Geoffrey Hinton, Yoshua Bengio and Yann
LeCun – made it clear they think system 2
capabilities should be learned by neural
networks, not built by hand.
The argument, says Richards, is that humans
aren’t smart enough to build symbol systems
that capture the complexity of the real world.
The focus therefore should be on working
out how to encourage a network to develop in
ways that mimic the brain’s development of
high-level cognitive abilities. “We are not smart
enough to hand-engineer this stuff,” he says.
“And you don’t have to be. You can just let the
neural network discover the solution.”
We still don’t know how to steer a network to do so, though. Brenden Lake at New York University
and Meta AI Research says a promising
approach is to build symbolic models that
replicate aspects of human intelligence and
then try to replace as many components as
possible with data-driven machine learning.
“You can take symbolic models that have been
really successful and then see what are the
minimal, critical symbolic pieces that you
need in order to explain its abilities,” he says.
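As a rough sketch of that strategy, here is a toy Python illustration (my own, not Lake’s models or code): start from a fully symbolic solver, swap in a data-driven stand-in for one component, and keep the symbolic rule only as a fallback, which exposes which symbolic pieces the system cannot yet do without.

# Toy illustration (not Lake's code): a symbolic solver whose components
# can be swapped, one at a time, for data-driven stand-ins.

def symbolic_add(expr):
    # Hand-engineered symbolic component: parse and evaluate "a + b".
    left, _, right = expr.split()
    return int(left) + int(right)

def make_learned_add(examples):
    # Stand-in for a learned component: it simply memorises training pairs,
    # where a real system would use a trained neural network.
    table = dict(examples)
    return table.get

def hybrid_solve(expr, learned_add):
    # Ask the data-driven component first; fall back to the symbolic rule
    # when it fails. The fallback marks a "critical symbolic piece".
    answer = learned_add(expr)
    return answer if answer is not None else symbolic_add(expr)

learned = make_learned_add([("1 + 1", 2), ("2 + 3", 5)])
print(hybrid_solve("2 + 3", learned))   # 5, from the learned component
print(hybrid_solve("10 + 7", learned))  # 17, from the symbolic fallback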
Ultimately, there are probably benefits to
both the top-down and bottom-up approaches,
says Konrad Kording at the University of
Pennsylvania. Studying human behaviour
can give us clues about the abstract cognitive
processes we need to replicate in thinking
machines, he says, while fundamental
neuroscience can tell us about the building
blocks required to build them efficiently.
But perhaps the biggest contribution
either approach can make to AI is cultural,
says Kording. AI research today is driven by
benchmark challenges and competitions,
which promote an incrementalist approach.
Most advances are achieved by simply tweaking the previous state-of-the-art model, training it on more data or running it on ever bigger computers.
Those who study human intelligence
bring a different perspective to the field.
“They’re driven by a will to understand
instead of a will to compete,” says Kording.
In the long run, that attitude may prove
more valuable than any details about how
our brains and minds work. ❚
engineers encode possible configurations
of pieces, the moves each can make and rules
about which moves will help win the game.
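As a deliberately tiny Python sketch of what that hand-encoding looks like (my own illustration, far simpler than any real chess engine): the position, the knight’s legal moves and a rule of thumb for ranking them are all written out explicitly by a programmer rather than learned from data.

# Toy symbolic encoding (not real chess-engine code): explicit data
# structures for the position, hand-written rules for legal moves and a
# hand-written rule for judging which move looks best.

# Squares are (file, rank) pairs; the state maps squares to pieces.
state = {(4, 4): "white_knight", (6, 5): "black_pawn"}

KNIGHT_OFFSETS = [(1, 2), (2, 1), (2, -1), (1, -2),
                  (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def knight_moves(square):
    # Rule: a knight moves in an L-shape and must stay on the 8x8 board.
    f, r = square
    return [(f + df, r + dr) for df, dr in KNIGHT_OFFSETS
            if 1 <= f + df <= 8 and 1 <= r + dr <= 8]

def score_move(target):
    # Rule of thumb: capturing an opposing piece beats a quiet move.
    return 1 if state.get(target, "").startswith("black") else 0

best = max(knight_moves((4, 4)), key=score_move)
print(best)  # (6, 5): the move that captures the black pawn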
Chess is one thing. Unpicking all the
variables and relationships that govern most
real-world problems is a different matter.
That is why symbolic AI fell out of favour
in the late 1980s, setting the stage for the
rise of data-driven deep learning. And yet
it turns out that many of symbolic AI’s
strengths overlap with the weaknesses we
have discovered in deep learning. Now, there
is growing interest in combining the two.
Hybrid intelligence
So-called neuro-symbolic systems attempt
to retain deep learning’s ability to learn from
new experiences, while introducing symbolic
AI’s ability to do complex reasoning and draw
on pre-existing knowledge. “There must be
some way of bringing the insights from these
two traditions together,” says Marcus.
One possibility was outlined at a conference in January 2021 by IBM’s Francesca Rossi and her colleagues. Their proposal builds on an idea set out by Daniel Kahneman in his best-selling book Thinking, Fast and Slow,
which splits the human mind into two broad
modes of thought. System 1 is fast, automatic
and intuitive, and responsible for rapidly
making sense of the world around us. System 2
is slow, analytical and logical, and controls our
ability to reason through complex problems.
The group combined this idea with AI
pioneer Marvin Minsky’s “society of mind”
theory, which postulates that the mind
consists of many specialised cognitive
processes that interact to create a coherent
whole. The result is a conceptual system
made up of multiple components specialised
for different system 1 and system 2 tasks.
As in the human mind, system 1 agents kick
in automatically as soon as the AI is set a task.
But an overarching “metacognitive” module
then assesses their solutions, and if they don’t
work, it pulls in a more deliberative system 2
agent. It doesn’t necessarily matter which
technology is used for individual components,
says Rossi, but in their early experiments,
the system 1 agents are often data-driven,
while system 2 agents and the metacognitive
module rely on symbolic approaches.
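That control flow can be sketched in a few lines of Python. This is my own toy illustration of the idea, not IBM’s system: a fast system 1 agent answers first, a metacognitive check decides whether that answer holds up, and only on failure is the slower, exhaustive system 2 agent called.

# Toy task for the sketch: find the largest number in a list.

def system1_guess(problem):
    # Fast, intuitive agent (stand-in for a data-driven model): it only
    # glances at the first few items, so it is quick but sometimes wrong.
    return max(problem["numbers"][:3])

def system2_solve(problem):
    # Slow, deliberate agent: exhaustively checks the whole input.
    return max(problem["numbers"])

def metacognition(problem, answer):
    # Metacognitive module: is the fast answer at least as large as every
    # number it was supposed to beat?
    return all(answer >= n for n in problem["numbers"])

def solve(problem):
    answer = system1_guess(problem)            # system 1 fires automatically
    if metacognition(problem, answer):         # good enough, accept it
        return answer, "system 1"
    return system2_solve(problem), "system 2"  # otherwise escalate

print(solve({"numbers": [9, 2, 5]}))      # (9, 'system 1')
print(solve({"numbers": [1, 2, 3, 42]}))  # (42, 'system 2')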
There is considerable resistance to the
revival of symbolic approaches. In a recent
paper, the three pioneers of deep learning –
haphazard evolutionary experiments. “The
brain is actually really flawed,” he says. What
we want is to emulate what it does, regardless
of how it is put together. “In some ways,
psychology might be more useful for that.”
Psychology has some clear and well-
validated models of the cognitive processes
behind intelligence. Take the principle
of compositionality, the idea that we
understand things in terms of their parts
and the relationships between those parts.
This underpins reasoning in humans,
says Marcus, but has proven difficult to
implement in artificial neural networks.
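A toy Python snippet (my own illustration, not anything from Marcus) shows what compositionality buys: because meaning is assembled from explicit parts and an explicit relation, a combination the program has never seen is still handled correctly.

# Toy illustration of compositionality: the meaning of the whole is built
# from the meanings of its parts plus the relation between them.

PARTS = {"dog": "an animal", "ball": "a toy", "robot": "a machine"}

def chases(subject, obj):
    # One relation, defined once, combines with any pair of known parts.
    return f"{PARTS[subject]} that moves toward {PARTS[obj]}"

# A pairing never seen before is still understood, because its pieces are.
print(chases("robot", "ball"))  # a machine that moves toward a toy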
There are ways to implement such
principles in machines. The basic idea, known
as symbolic AI, was the dominant approach
to AI in the second half of the 20th century.
It builds on cognitive theories describing
how humans think by manipulating symbols.
We use the word “dog” to refer to a real-world
animal, for instance, and we know that the
+ sign means add two values together.
For engineers, creating symbolic AIs
involves generating structured ways to
represent real-world concepts and their
relationships as well as rules about how
a computer can process this information
to solve problems. With chess, for instance,
Researchers are trying to copy the complexity of real brain cells in AIs
Edd Gent is a freelance journalist
based in Bangalore, India