Skeptic March 2020

demonstrates in some detail later in
the book.
Mitchell briefly discusses the famous Turing Test, which the seminal mathematician Alan Turing proposed in 1950 (calling it the Imitation Game) as a way to operationalize the troublesome concept of intelligence. This is equivalent to saying that NASA would have successfully landed on the moon if only it had done a better job of faking it, as some conspiracy theorists insist it tried to do. Mitchell points out how easily the Turing Test has been "passed" with naive testers using trivial chatbots, but doesn't herself pass judgment on the validity of the test.
In Part II, Mitchell addresses the difficulty of image recognition and machine learning—particularly deep learning. She begins with a black-and-white version of this photograph (Figure 1). Although there are a few recognizable entities in the photograph—a dog, a woman wearing a camouflage suit, a bouquet, a laptop, a Welcome Home! balloon, a suitcase, an American flag, and so on—this photograph tells a story, which is why it was deemed one of the 50 best military photos of 2015. No AI program in existence can even begin to make sense of it. For example, Microsoft's CaptionBot, which claims that "I can understand the content of any photograph and I'll try to describe it as well as any human," fails repeatedly with this photograph, giving the excuse "I seem to be under the weather right now. Try again later." When I performed a Google image search on it, Google found only the dog. Having decided that, it provided a huge number of "very similar" images in which the only element in common was a dog.
This is just the introduction to Mitchell's deeper discussion of image recognition, how deep learning programs work, and the fragility of their apparent success. Apparently well known in the AI community but little known outside of it (and to me): you can take a recognizable image and turn it into an apparently identical one that AI programs will identify, with great confidence, as whatever you like, including something entirely different.
Mitchell gives several examples, including this pair (Figure 2). Perhaps even more stunning are the examples of TV snow and abstract geometric patterns identified (again, with great confidence) as specific entities (Figure 3). As Mitchell concludes, this reveals that these image recognition programs aren't learning what we think they're learning.
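The mechanism behind these altered images can be sketched numerically. The toy below is not Mitchell's example or any real network: it applies the signed-gradient idea behind such attacks to a made-up two-class linear classifier on a three-feature "image," showing how a small, carefully chosen step in input space flips a confident label.

```python
import numpy as np

# Toy sketch of the adversarial-image idea: a small, carefully chosen
# perturbation flips a classifier's confident label. Everything here is
# made up -- a 3-feature "image" and a 2-class linear model, not a real
# deep network -- so only the mechanism is illustrated.

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

W = np.array([[ 1.0, -0.5,  0.3],    # class-0 weights (hypothetical)
              [-0.8,  0.6, -0.2]])   # class-1 weights (hypothetical)

x = np.array([0.9, 0.1, 0.4])        # input the model labels class 0
probs = softmax(W @ x)               # confident: class 0 wins

# Signed-gradient step: nudge every input feature a fixed small amount
# in whichever direction hurts the class-0 score.
grad = W[0] - W[1]                   # direction favoring class 0
eps = 0.6
x_adv = x - eps * np.sign(grad)      # perturbed "image"
probs_adv = softmax(W @ x_adv)       # now class 1 wins
```

In a real deep network the same trick spreads the perturbation across millions of pixels, so each one changes too little for a human to notice—hence the "apparently identical" school bus labeled an ostrich.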
For some reason, Mitchell places the chapter entitled "On Trustworthy and Ethical AI" here in Part II (Looking and Learning) rather than in the concluding section where it more properly belongs. The 13 pages she devotes are inadequate


Figure 1—A richly detailed storytelling photograph. AI image recognition programs have difficulty interpreting it beyond recognizing that it contains a dog. U.S. Air Force photo by Airman 1st Class Erica Crossen.

Figure 2: A recognizable image can be imperceptibly altered so that it will be misidentified by image recognition software. (Left): Original photograph, correctly identified as a school bus. (Center): Differences to be applied to the original image to cause misidentification. (Right): Photograph apparently of a school bus, incorrectly identified as an ostrich. (See "Intriguing Properties of Neural Networks" for discussion.)
