New Scientist - USA (2020-07-04)

Even if I could make my dog think faster, it is still unlikely to play chess. It doesn’t have the necessary mental constructs, the language and the abstractions. Steven Pinker put this argument eloquently: “Sheer processing power is not a pixie dust that magically solves all your problems.”

Intelligence is much more than thinking faster or longer about a problem than someone else. Of course, Moore’s law has helped AI. We now learn faster, and from bigger data sets. Speedier computers will certainly help us to build artificial intelligence. But, at least for humans, intelligence depends on many other things, including years of experience and training. It is not at all clear that we can short-circuit this in silicon simply by increasing the clock speed or adding more memory.

The anthropocentric argument

The singularity supposes human intelligence is some special point to pass, some sort of tipping point. Bostrom writes: “Human-level artificial intelligence leads quickly to greater-than-human-level artificial intelligence... The interval during which the machines and humans are roughly matched will likely be brief. Shortly thereafter, humans will be unable to compete intellectually with artificial minds.”

If there is one thing that we should have learned from the history of science, it is that we are not as special as we would like to believe. Copernicus taught us that the universe does not revolve around Earth. Darwin showed us that we are not so different from other apes. Watson, Crick and Franklin revealed that the same DNA code of life powers us and the simplest amoeba. And artificial intelligence will no doubt teach us that human intelligence is itself nothing special. There is no reason to suppose that human intelligence is a tipping point that, once passed, allows for rapid increases in intelligence.

Of course, human intelligence is a special point because we are, as far as we know, unique in being able to build artefacts that amplify our intellectual abilities. We are the only creatures on the planet with sufficient intelligence to design new intelligence, and this new intelligence will not be limited by the slow process of human reproduction and evolution. But that does not bring us to the tipping point, the point of recursive self-improvement. We have no reason to suppose that human intelligence is enough to design an artificial intelligence that is sufficiently intelligent to be the starting point for a technological singularity.

Even if we have enough intelligence to design super-human artificial intelligence, the result may not be adequate to precipitate a technological singularity. Improving intelligence is far harder than just being intelligent.

The “diminishing returns” argument
The idea of a technological singularity supposes that improvements to intelligence will come by a relatively constant multiplier, each generation getting some fraction better than the last. However, the performance of most of our AI systems has so far been one of diminishing returns. There is often a lot of low-hanging fruit at the start, but we then run into difficulties when looking for improvements. This helps explain the overly optimistic claims made by many of the early AI researchers. An AI system may be able to improve itself an infinite number of times, but the extent to which its intelligence changes overall could be bounded. For instance, if each generation only improves by half the last change, then the system will never get beyond doubling its overall intelligence.
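
To put a number on that last claim (the arithmetic below is an illustrative sketch, not a formula from the article): suppose the first round of self-improvement adds a gain c to the system’s intelligence, and every later round adds half the gain of the round before. The total gain is then a convergent geometric series:

\[
\text{total gain} = c + \frac{c}{2} + \frac{c}{4} + \dots = c\sum_{k=0}^{\infty}\frac{1}{2^{k}} = 2c
\]

However many rounds of improvement the system makes, its intelligence rises by at most 2c; if the first gain c is half the starting intelligence, the system never gets beyond doubling, exactly as claimed above.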

The “limits of intelligence” argument
There are many fundamental limits within the universe. Some are physical: you cannot accelerate past the speed of light, know both position and momentum with complete accuracy, or know when a radioactive atom will decay. Any thinking machine that we build will be limited by these physical laws. Of course, if that machine is electronic or even quantum in nature, these limits are likely to be beyond the biological and chemical limits of our human brains. Nevertheless, AI may well run into some fundamental limits. Some of these may be due to the inherent uncertainty of nature. No matter how hard we think about a problem, there may be limits to the quality of our decision-making.