making. Even a super-human intelligence is not going
to be any better than you at predicting the result of the
next EuroMillions lottery.
The “computational complexity” argument
Finally, computer science already has a well-developed
theory of how difficult it is to solve different problems.
There are many computational problems that even exponential
improvements in computing power would not let us solve in
practice. A computer cannot
analyse some code and know for sure whether it will ever
stop – the “halting problem”. Alan Turing, the father of
both computing and AI, famously proved that such a
problem is not computable in general, no matter how
fast or smart we make the computer analysing the code.
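Turing's argument can be sketched in a few lines of code. The Python below is a hypothetical illustration only: it assumes an imaginary halts() oracle (the function names halts and contrary are ours, not anything that exists) and shows how feeding a deliberately contrary program its own source forces a contradiction, which is why no such oracle can be written.

```python
# A minimal sketch of Turing's diagonal argument, assuming a
# hypothetical oracle `halts(program, data)` that could decide the
# halting problem. No such oracle can actually be implemented; the
# point of the sketch is the contradiction it leads to.

def halts(program, data):
    """Hypothetical: return True if program(data) eventually stops,
    False if it runs forever. Turing proved this cannot be done
    correctly for every possible program."""
    raise NotImplementedError("No general algorithm exists for this.")

def contrary(program):
    """Do the opposite of whatever the oracle predicts the program
    does when given its own source as input."""
    if halts(program, program):
        while True:       # oracle says it halts, so loop forever
            pass
    else:
        return            # oracle says it loops, so halt immediately

# Now ask: does contrary(contrary) halt? If it halts, the oracle said
# it halts, so it loops forever. If it loops, the oracle said it loops,
# so it halts. Either way the oracle is wrong, so `halts` cannot exist.
```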
Switching to other types of device, such as quantum
computers, will help with some problems. But at best they offer
exponential speed-ups over classical machines, which is not
enough to crack uncomputable problems like the halting problem.
There are hypothetical hypercomputers that might break
through such computational barriers. However, whether
such devices could exist remains controversial.
TWO FUTURES
So there are many reasons why we might never
witness a technological singularity. But even without
an intelligence explosion, we could end up with
machines that exhibit super-human intelligence.
We might just have to painstakingly program much of that
intelligence ourselves. If so, AI's impact on our economy
and society may arrive more slowly
than people like Hawking fear. Nevertheless, we
should start planning for that impact.
Even without a technological singularity, AI is likely to
have a large impact on the nature of work. Many jobs, such as
taxi and truck driving, are likely to disappear in the next
decade or two. This will further increase the inequalities
we see in society today. And even quite limited AI is
likely to have a large influence on the nature of war.
Robots will industrialise warfare, lowering the barriers
to war and destabilising the current world order. They
will be used by terrorists and rogue nations against us.
If we don’t want to end up with Terminator, we had
better ban robots on the battlefield soon. If we get it
right, AI will help make us all healthier, wealthier and
happier. If we get it wrong, AI may well be one of the
worst mistakes we ever get to make. ❚

Toby Walsh is professor of artificial intelligence at the
University of New South Wales in Sydney, Australia, and
author of books including Machines that Think: The Future
of Artificial Intelligence and 2062: The World that AI Made