Could superintelligences of our creation in some way turn against us or supersede us on Earth? That idea is a staple both of science fiction and discussions surrounding how artificial intelligence will evolve. In this classic New Scientist article, reproduced in our new Essential Guide: Artificial Intelligence, AI researcher Toby Walsh argues that while machines may well in the long run outsmart us, there are any number of reasons to believe they’ll never usurp us.

HOWEVER you look at it, the future appears
bleak. The world is under immense stress
environmentally, economically and
politically. It’s hard to know what to fear the
most. Even our own existence is no longer
certain. Threats loom from many possible
directions: a giant asteroid strike, global warming, a
new plague, or nanomachines going rogue and turning
everything into grey goo.
Another threat is artificial intelligence. In December
2014, Stephen Hawking told the BBC that “the
development of full artificial intelligence could spell
the end of the human race... It would take off on its
own, and redesign itself at an ever increasing rate.
Humans, who are limited by slow biological evolution,
couldn’t compete, and would be superseded.” In 2016,
he followed that up by saying that AI is likely “either the
best or worst thing ever to happen to humanity”. Other
prominent people, including Elon Musk, Bill Gates and
Steve Wozniak, have made similar predictions about
the risk AI poses to humanity.
Hawking’s fears revolve around the idea of the
technological “singularity”. This is the point in time at
which machine intelligence starts to take off, and a new,
more intelligent species starts to inhabit Earth. We can
trace the idea of the technological singularity back to a
number of different thinkers, including John von
Neumann, one of the founders of computing, and the
science fiction author Vernor Vinge.
The idea is roughly the same age as research into AI
itself. In 1958, mathematician Stanisław Ulam wrote a
tribute to the recently deceased von Neumann, in which
he recalled: “One conversation centered on the ever
accelerating progress of technology and changes in the
mode of human life, which gives the appearance of
approaching some essential singularity... beyond which
human affairs, as we know them, could not continue”.
More recently, the idea of a technological singularity
has been popularised by Ray Kurzweil, who predicts it
will happen around 2045, and Nick Bostrom, who has
written a bestseller on the consequences.
There are several reasons to be fearful of machines
overtaking us in intelligence. Humans have become
the dominant species on the planet largely because we
are so intelligent. Many animals are bigger, faster or
stronger than us. But we used our intelligence to
invent tools, agriculture and amazing technologies like
steam engines, electric motors and smartphones.
These have transformed our lives and allowed us to
dominate the planet.
It is therefore not surprising that machines that
think – and might even think better than us – threaten
to usurp us. Just as elephants, dolphins and pandas
depend on our goodwill for their continued existence,
our fate in turn may depend on the decisions of these
superior thinking machines.
The idea of an intelligence explosion, when
machines recursively improve their intelligence and
thus quickly exceed human intelligence, is not a
particularly wild idea. The field of computing has
profited considerably from many similar exponential
trends. Moore’s law predicted that the number of
transistors on an integrated circuit would double every
two years, and it has pretty much done so for decades.
So it is not unreasonable to suppose AI will also
experience exponential growth.
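To make the scale of such an exponential trend concrete, here is a minimal illustrative sketch (not part of the original article) in Python. It simply assumes, as Moore’s law describes, that a quantity doubles every two years, and prints the cumulative growth factor over a few decades:

```python
# Illustrative sketch: cumulative growth under steady doubling every two years,
# the pattern Moore's law describes for transistor counts.

def growth_factor(years: float, doubling_period: float = 2.0) -> float:
    """Return the multiplicative growth after `years` of steady doubling."""
    return 2 ** (years / doubling_period)

if __name__ == "__main__":
    for years in (10, 20, 30, 40):
        print(f"After {years} years: roughly {growth_factor(years):,.0f}x")
    # After 40 years of doubling every two years the factor is about a million,
    # which is why exponential trends in computing feel so dramatic.
```

The numbers here are purely arithmetic, of course; whether machine intelligence would follow any such curve is exactly what the rest of the article questions.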
Like many of my colleagues working in AI, I predict
we are just 30 or 40 years away from AI achieving
superhuman intelligence. But there are several strong
reasons why a technological singularity is improbable.
The “fast-thinking dog” argument
Silicon has a significant speed advantage over our
brain’s wetware, and this advantage doubles every two
years or so according to Moore’s law. But speed alone
does not bring increased intelligence. Even if I can