
“I don’t think anybody really thinks they’re inches away from the human level,” says Ken Stanley at the University of Central Florida, who is the founder of the AI lab at Uber. Although AlphaZero used the same algorithm to teach itself to play Go and chess, its chess-playing neural net can’t play Go and its Go-playing one can’t play chess. AlphaZero isn’t able to apply lessons learned in one game to another.

For deep learning to reach the next level on the long climb to human-like intelligence, neural networks need to become generalists. That may involve wholly new kinds of interconnectivity or new rules for activating the network’s software neurons. “We don’t just want complexity for its own sake,” says Stanley. “We want it because it produces things that are amazing.”

Stanley hopes to reach generalisation through neuroevolution: neural networks that improve themselves using techniques inspired by nature. The basic way of using evolutionary techniques in computing is to start with random solutions, select the best ones, mix them together and try again. Repeat this millions of times and the system can converge on a successful AI design on its own.
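
That loop is easy to sketch in code. Here is a minimal, illustrative version in Python; the genome length, population size, mutation rate and toy fitness function are all assumptions made for the sake of the example, not details of any real neuroevolution system:

    import random

    # Toy evolutionary loop. Start with random solutions, select the
    # best, mix them together and repeat; all parameters here are
    # illustrative assumptions.
    GENOME_LEN = 10      # numbers standing in for a network's weights
    POP_SIZE = 50
    GENERATIONS = 200

    def fitness(genome):
        # Stand-in objective: genomes closer to all-ones score higher.
        return -sum((g - 1.0) ** 2 for g in genome)

    def crossover(a, b):
        # Mix two parents: take each gene from either parent at random.
        return [random.choice(pair) for pair in zip(a, b)]

    def mutate(genome, rate=0.1):
        # Occasionally nudge a gene, mimicking random variation.
        return [g + random.gauss(0, 0.1) if random.random() < rate else g
                for g in genome]

    # Start with random solutions...
    population = [[random.uniform(-1, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]

    for _ in range(GENERATIONS):
        # ...select the best ones...
        population.sort(key=fitness, reverse=True)
        parents = population[:POP_SIZE // 5]
        # ...mix them together and try again.
        population = [mutate(crossover(random.choice(parents),
                                       random.choice(parents)))
                      for _ in range(POP_SIZE)]

    print("best fitness:", round(max(map(fitness, population)), 3))
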
Assuming such techniques work and we can build ever better AIs, the most promising possibility is that they will become our collaborators. “It is inevitable that humans will be interacting with AI to accomplish tasks,” says Devi Parikh at the Georgia Institute of Technology in Atlanta.

My other brain is a computer

For successful collaboration, we require trust, which for Parikh means we need to develop an AI theory of mind. In humans and some other animals, a theory of mind lets individuals ascribe mental states to others: it lets someone see another’s point of view and grasp their beliefs and intentions.

The more aware people are of what is going on in a colleague’s head, the more effective they are at working together and adapting to each other’s strengths and weaknesses, says Parikh. Why shouldn’t the same apply to teams made up of humans and AIs? (See “Alien thinking”.)

Part of this will come from making AIs more interpretable and transparent. But the most fruitful collaborations may arise from partnering up and thinking alongside them. Everyone knows that Kasparov lost to Deep Blue. What is less well known is that Kasparov then went on to invent advanced chess, also known as centaur chess, in which humans and computers team up and play in pairs. As expected, amateur players with computers can beat grandmasters playing alone. But even when both use computers, amateurs can beat grandmasters. When this happens, it is typically because the amateur is a better team player than the expert, who is more likely to disregard the machine’s suggestions.

Of course, there are situations where we should expect human judgement to beat an AI’s. Matters of aesthetics or ethics are the most obvious candidates. Take generative design, where AIs are already being used to create thousands of blueprints for potential parts of aircraft and cars. The program whittles these down to a few strong candidates that a human then chooses from. There are good reasons to keep humans in the loop with autonomous weapons systems, too.

If we can work out when an AI should ask a human for help, the combined thinking could be far more powerful than that of the AI alone. For Sandberg, the trick would be to have an AI that does a lot of our thinking for us but only makes decisions that we are comfortable with. Owain Evans, a colleague of Sandberg’s at the University of Oxford, is trying to teach an AI about human values. By asking questions about potential decisions, the system is learning what it should do in different moral situations. AIs might check in with our values every now and then. Would you disapprove of this action? Would this outcome be OK? “It might be smarter than me, but it’s still only doing things that I would do,” says Sandberg.
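
In code, that kind of check-in might look like a simple deferral rule: act autonomously only when a learned model of our preferences is confident we would approve, and ask otherwise. The sketch below is purely illustrative; the confidence threshold, the approval scores and the ask_human step are assumptions, not a description of Evans’s actual system:

    # Illustrative deferral rule, not Evans's actual system: act alone
    # only when a learned model of our values is confident we would
    # approve; otherwise check in with a person first.
    CONFIDENCE = 0.9  # assumed cut-off for acting without asking

    def approval_probability(action, preferences):
        # Stand-in for a model trained on answers to questions like
        # "Would you disapprove of this action?"
        return preferences.get(action, 0.5)

    def ask_human(action):
        # Check in with our values before going ahead.
        return input(f"OK to {action}? [y/n] ").strip().lower() == "y"

    def decide(action, preferences):
        if approval_probability(action, preferences) >= CONFIDENCE:
            return True               # confident we would approve
        return ask_human(action)      # uncertain, so ask first

    # Toy usage with assumed, pre-learned approval scores.
    learned = {"reply to a routine email": 0.97, "delete old files": 0.55}
    for action in learned:
        print(action, "->", "act" if decide(action, learned) else "hold off")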

Stanley agrees about the value of human input. He says he was blown away by experiments he conducted with a neural network evolved to guide a robot through a maze. With occasional human input, the AI got much better. It makes sense that we should begin to solve problems in tandem with machines. We can’t beat them at individual tasks, so let’s join them.

“If we play things right, we are going to broaden the way we can think about problems,” says Sandberg. “We know that problem-solving is helped by having different perspectives. Soon, we could have perspectives that are different from any we’ve had before.” ❚

[Image: DeepMind’s AlphaStar AI learned from thousands of games of StarCraft II]
[Image: Chess-playing AIs are strong enough to develop new strategies]
(Images: DeepMind)

Douglas Heaven is a writer based in London, specialising in technology and AI. Follow him on Twitter @strwbilly
