and bake safety into AI at the outset, rather than try to sprinkle it on top later. Russell calls for “provably beneficial AI,” a concept that can be applied to both AGI and narrow AI. Engineers, he writes, should provide AI systems with a clear main purpose (managing a city’s power grid, for example) and also explicitly program them to be uncertain about people’s objectives and to possess the ability to learn more about them by observing human behavior. In so doing, the systems would aim to “maximize human future-life preferences.” That is, a power-grid AI should find ways to lower power consumption instead of, say, wiping out humans to save on electricity bills. Thinking in these terms “isn’t scaremongering,” writes Tegmark. “It’s safety engineering.”

The cognitive scientist Daniel Dennett proposes a more creative solution to the safety conundrum. Why not require AI operators to be licensed, just as pharmacists and civil engineers are? “With pressure from insurance companies and other underwriters,” he writes, regulators could “oblige creators of AI systems to go to extraordinary lengths to search for and reveal weaknesses and gaps in their products, and to train those entitled to operate them.” He cleverly suggests an “inverted” version of the Turing test. Instead of evaluating a machine’s ability to imitate human behavior, as the test normally does, Dennett’s version would put the human judge on trial: until a person who is highly trained in AI can spot the flaws in a system, it can’t be put into production. The idea is a thought experiment, but a clarifying one.

The benefit of such standards is that systems would undergo inspections to prevent mistakes. It would be hard to know, however, when to make these extra safety steps obligatory. Surely, the algorithms that guide a self-driving car should be regulated in this way. But what about the ones that determine which videos a website such as YouTube will recommend to users? Yes, regulations could offer societal benefits, such as the downgrading of Flat Earth Society videos on YouTube, but if an algorithm commissar had to approve every line of a company’s code, it could start to feel like overreach.

Missing almost entirely from Possible Minds is any discussion of another dilemma relating to the regulation of AI: how to weigh privacy against efficiency and accuracy. The more data an AI system has access to, the better it performs. But privacy regulations often discourage the collection and use of personal data. Minimizing the quantity and type of data that can be used in AI systems may seem wise in an era when companies and countries are vacuuming up all the personal data they can and paying little attention to the risks of misuse. But if regulations winnowed the amount of data that was processed, leading to less accurate performance for products such as medical diagnostics, society might want to reconsider the tradeoff.
INTO THE UNKNOWN

Another tension in AI, and one that runs through Possible Minds, is the transparency and explainability of how AI systems reach their conclusions. This is actually a technical concern, not an epistemological or normative one. That is to say, the question is not whether people are clever enough to understand