New Scientist - USA (2019-06-08)

AI to help predict the consequences of societal
transformations. And we have been very active
in looking at how machine learning can help
promote human rights, by detecting gender
bias, for example.

Does AI have any potential risks?
How much time do I have? Lethal autonomous
weapons, for example, are morally wrong
because computers don’t understand moral
values. That means they might not be able
to question the legitimacy of an order. They
are also wrong for security reasons, as they
can threaten the global balance of power.
That could be dangerous for all of us.
AI can also be used to control people,
monitoring where they are and reinforcing
the power of authoritarian governments in a
way that wasn’t possible before. For example,
we know that China has hundreds of millions
of cameras in the streets, equipped with the
technology for facial recognition.
Another related danger that people talk
about less is how AI could be used to influence
others. Think about advertising, the influence
of which is usually seen as benign, and extend
those techniques to politics. It might not
matter if I buy Coke or Pepsi, but it does matter
if I vote for Trump or not, right?
Then you have the issue of bias and diversity,
where AI can reinforce some of the negative
aspects of our current societies. And finally,
there is the concentration of power. A powerful technology left to its own devices will just reinforce the power of those who control it, and that is bad for democracy as well.

Do you think the big technology companies,
like Google or Facebook, have too much power?
Yes. I think the trend is concerning. There is a
snowball effect. The more data you have, the
more customers you have. And if you are rich,
you can pay for the best researchers. This is bad
for innovation and it’s bad for democracy. It is
bad for innovation, because innovation comes
from diversity, from many different people
with different goals trying different things.
And then for democracy... well, democracy,
what does it mean? It is power to the people.
If power is concentrated in a few hands,
that isn’t good.

How do you feel about the future of AI?
I am very optimistic about the science making
a lot of progress in the coming decades. I think
brains are very complicated machines, but
I think we will figure out the principles of
intelligence, which will help us make better AI.
So I am very optimistic on that side, but I also realise that it might take decades, or even centuries, for all we know. But we will get there, unless we self-destruct in some way as a social organisation.

Is an AI apocalypse something people should be afraid of?
Well, I’m not.

Why not?
Because that scenario just doesn’t fit my understanding of the science of AI right now. I don’t see it as credible. Now, I don’t have a crystal ball, and the science of AI 50 years from now will be very different. I think we need to be prudent, and it is good that there are people who are thinking about these issues, but making it a political or social question at this point is very premature. We should be worrying about those other, shorter-term issues that, for sure, are happening and need our attention.

Are the short-term issues unstoppable, or is there a way to halt them?
There is a way, but it isn’t an easy one. We need to have society at large understand those issues and bring them to the forefront of the political agenda, so that governments act as they should.

Finally, what do people get wrong about AI?
So many people overestimate the intelligence of these systems. AIs are really dumb. They don’t understand the world. They don’t understand humans. They don’t even have the intelligence of a 6-month-old. Yes, they can beat the world champion at Go. But that doesn’t mean they can do anything else.
A related misconception is that intelligent robots are taking over the world. People project their own emotions and feelings onto the machines that we will build in the future and think, “well, if I was that machine, I would be angry that all these guys have been enslaving me, and now I’m out for revenge”.
But I think this is nonsense. We are designing those machines, which means the real danger is if an AI gets into the wrong hands, and is then used in ways that will hurt us. It isn’t that the AI is malevolent, it is the humans that are stupid and/or greedy. ❚

Timothy Revell is assistant
news editor at New Scientist.
Follow him on Twitter
@timothyrevell