

advances in computer processing and an explosion of data.

The success of deep learning has revived Wiener’s fears of computer monsters running amok, and the biggest debates in AI today revolve around safety. The Microsoft founder Bill Gates and the late cosmologist Stephen Hawking famously fretted about it. At a conference in 2014, the technology entrepreneur Elon Musk described AI as “summoning the demon.” Others, such as the AI researchers Stuart Russell and Max Tegmark, along with the engineer Jaan Tallinn, believe that AI represents a serious threat to humanity that requires immediate attention.

Broadly speaking, there are two types of AI. The first is artificial general intelligence, known as AGI: systems that can think, plan, and respond like a human and also possess “superintelligence.” An AGI system would know much of the information that exists, be able to process it at lightning speed, and never forget any of it. Imagine Google with a mind (and maybe a will) of its own. The second form of AI is narrow AI: systems that do discrete tasks very well, such as self-driving cars, voice recognition technology, and software that can make medical diagnoses using advanced imaging. The fear about AGI is that it may evolve on its own, outside of human control. The worry about narrow AI is that its human designers will fail to perfectly specify their intent, with catastrophic consequences.

No consensus exists among experts about whether AGI is even possible. But those who believe that it is worry that if an AGI system did not share human values (and there is no inherent reason why it would), it might cause trouble. “Humans might be seen as minor annoyances, like ants at a picnic,” writes W. Daniel Hillis, a computer scientist, in his contribution to Possible Minds. “Our most complex machines, like the Internet, have already grown beyond the detailed understanding of a single human, and their emergent behaviors may be well beyond our ken.”

The trouble comes in how to specify such a system’s goal, or what engineers call its “value alignment.” The fear is not necessarily that AI will become conscious and want to destroy people but that the system might misinterpret its instructions.

Russell has dubbed this “the King Midas problem,” from the ancient Greek myth about the king who received his wish to turn everything he touched into gold, only to realize that he couldn’t eat or drink gold. The canonical illustration of this in the literature is an AGI system that is able to perform almost any task that is asked of it. If a human asks it to make paper clips and fails to specify how many, the system, not understanding that humans value nearly anything more than paper clips, will turn all of earth into a paper clip factory, before colonizing other planets to mine ore for still more paper clips. (This is different from the threat of narrow AI run amok; unlike AGI, a narrow AI system programmed to produce paper clips would not be capable of doing anything more than that, so intergalactic stationery products are out.) It’s a ludicrous example, but one that’s bandied about seriously.

MAKING AI SAFE FOR HUMANS

On the other side of the debate are critics who dismiss such fears and argue that the dangers are minimal, at least for now. Despite all the optimism and
