New Scientist - USA (2019-10-12)

Views Culture

Impaired visions

Trevor Paglen’s latest installation reveals the ethical gulf that yawns when we leave computers to organise the world for us, says Simon Ings

Exhibition
Trevor Paglen: From “Apple” to “Anomaly”
Barbican Centre, London
Until 16 February 2020

A COUPLE of days before the opening of Trevor Paglen’s latest photographic installation, From “Apple” to “Anomaly”, a related project by the artist found itself splashed all over the papers. ImageNet Roulette is an online collaboration with artificial intelligence researcher Kate Crawford at New York University. The website invites you to provide an image of your face. An algorithm will then compare your face against a database called ImageNet and assign you to one or two of its 21,000 categories.

ImageNet has become one of the most influential visual data sets in the fields of deep learning and AI. Its creators at Stanford, Princeton and other US universities harvested more than 14 million photographs from photo upload sites and other internet sources, then had them manually categorised by some 25,000 workers on Amazon’s crowdsourcing labour site Mechanical Turk. ImageNet is widely used as a training data set for image-based AI systems and is the secret sauce within many key applications, from phone filters to medical imaging, biometrics and autonomous cars.

According to ImageNet Roulette, I look like a “political scientist” and a “historian”. Both descriptions are sort-of accurate and highly flattering. I was impressed. Mind you, I’m a white man. We are all over the internet, and the neural net had plenty of “my sort” to go on. Spare a thought for Guardian journalist Julia Carrie Wong, however. According to ImageNet Roulette she was a “gook” and a “slant-eye”. In its attempt to identify Wong’s “sort”, ImageNet Roulette had innocently turned up some racist labels.

From “Apple” to “Anomaly” also takes ImageNet to task. Paglen took a selection of 35,000 photos from ImageNet’s archive, printed them out and stuck them to the wall of the Curve gallery at the Barbican in London in a 50-metre-long collage. The entry point is images labelled “apple” – a category that, unsurprisingly, yields mostly pictures of apples – but the piece then works through increasingly abstract and controversial categories such as “sister” and “racist”. (Among the “racists” are Roger Moore and Barack Obama; my guess is that being over-represented in a data set carries its own set of risks.) Paglen explains: “We can all look at an apple and call it by its name. An apple is an apple. But what about a noun like ‘sister’, which is a relational concept? What might seem like a simple idea – categorising objects or naming pictures – quickly becomes a process of judgement.”

The final category in the show is “anomaly”. There is, of course, no such thing as an anomaly in nature. Anomalies are simply things that don’t conform to the classification systems we set up. Halfway along the vast, gallery-spanning collage of photographs, the slew of predominantly natural and environmental images peters out, replaced by human faces. Discreet labels here and there indicate which of ImageNet’s categories are being illustrated. At one point of transition, the group labelled “bottom feeder” consists entirely of headshots of media figures – there isn’t one aquatic creature in evidence.

Scanning From “Apple” to “Anomaly” gives gallery-goers many such unexpected, disconcerting insights into the way language parcels up the world. Sometimes, these threaten to undermine the piece itself. Passing seamlessly from “android” to “minibar”, one might suppose that we are passing from category to category according to the logic of a visual algorithm.


TIM P. WHITBY/GETTY IMAGES FOR BARBICAN CENTRE
From “Apple” to “Anomaly” has photos arranged in intriguing categories