
Ready for Robots?



The issue is not how a system works; it is whether the system’s operation is knowable at all. As Judea Pearl, a major figure in computer science and statistics, writes in his contribution: “Deep learning has its own dynamics, it does its own repair and its own optimization, and it gives you the right results most of the time. But when it doesn’t, you don’t have a clue about what went wrong and what should be fixed.”

Nontransparent systems can reach correct answers: human minds occasionally do get things right, after all. But with AI, if the system fails, it might do so in unexpected, mysterious, and catastrophic ways. If we cannot understand how it works, can we fully trust it? This is different from AI’s “black box” problem, in which bias in the data may lead to unfair outcomes, such as discriminatory loan, hiring, or sentencing decisions. That’s a problem that can be fixed by requiring, as a first step, that such systems be open to inspection by a competent authority. But the fundamental unknowability of AI systems presents a deeper, more unsettling problem. The scientific project emerged in the seventeenth century, when empirical evidence was placed above knowledge based on faith, which at the time was usually sanctioned by the Catholic Church. Does the advent of AI mean we need to place our trust once again in a higher power that we cannot interrogate for answers?

The trouble is that the mathematics behind deep learning is inherently obscure. Deep-learning systems (also known as “neural networks,” since they are loosely modeled on the neurons and connections in the brain) have many nodes arranged in layers that are all interconnected. Such a system models reality at a basic level of abstraction and then moves to more specific features. It might start to analyze an image by identifying an edge, and then identifying a shape, and then identifying spots on the surface of the shape. In this way, it can eventually detect the contents of an image. After pattern matching from an enormous batch of previously inputted images (whose contents are usually identified and labeled), the system can predict the contents with a high probability of success. Hence, a deep-learning system can identify a cat without having to be told which specific features to look for, such as whiskers or pointy ears. Those features are captured by the system itself, through a series of discrete statistical functions. The system is trained by the data, not programmed. Its answers are inferences.
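To make this concrete, here is a minimal sketch, in Python, of the kind of system described above: a few layers of interconnected nodes fitted to labeled data by repeated statistical adjustment. The toy dataset, the layer sizes, and the number of training steps are illustrative assumptions, not details from the essay; real deep-learning systems have millions of such weights, which is precisely why their reasoning is so hard to retrace.

import numpy as np

rng = np.random.default_rng(0)

# Toy labeled data: two clusters of two-dimensional points, labeled 0 and 1.
X = np.vstack([rng.normal(-1.0, 0.5, (100, 2)),
               rng.normal(1.0, 0.5, (100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])

# A small network: 2 inputs -> 8 interconnected hidden nodes -> 1 output.
W1 = rng.normal(0.0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(2000):
    # Forward pass: each layer builds a slightly more specific abstraction.
    h = np.tanh(X @ W1 + b1)          # hidden features, learned from the data
    p = sigmoid(h @ W2 + b2).ravel()  # predicted probability of label 1

    # Backward pass: nudge every weight a little to reduce the error.
    d_out = (p - y)[:, None] / len(y)
    d_W2 = h.T @ d_out
    d_h = (d_out @ W2.T) * (1.0 - h**2)
    d_W1 = X.T @ d_h
    W2 -= d_W2; b2 -= d_out.sum(axis=0)
    W1 -= d_W1; b1 -= d_h.sum(axis=0)

# No rule for cluster membership was ever written down; the trained
# weights encode it implicitly, and the output is an inference.
print("training accuracy:", ((p > 0.5) == y).mean())

Freeze the loop and the weights can at least be inspected as a snapshot; let the system keep updating on feedback, as the essay notes, and even that snapshot disappears.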
And it works. That’s the good news. The bad news is that the mathematical functions are so complex that it is impossible to say how a deep-learning machine obtained its result. There is such a jumble of different paths that can lead to a decision that retracing the machine’s steps is basically infeasible. Moreover, the system can be designed to improve based on feedback, so unless one freezes its performance and prevents such changes, it is impossible to review how it reached its output. As George Dyson, a historian of computing, writes in his essay, “Any system simple enough to be understandable will not be complicated enough to behave intelligently, while any system complicated enough to behave intelligently will be too complicated to understand.” Although a lot of research is going into “explainable AI,” so far the math bears out what might be named “Dyson’s Law.”
