Time - USA (2022-01-31)



TECHNOLOGY


that despite appearances, these AIs were nowhere near sentient. The paper compared the systems to “parrots” that were simply very good at repeating combinations of words from their training data. This meant they were especially susceptible to bias. Part of the problem was that in the race to build ever bigger data sets, companies had begun to build programs that could scrape text from the Internet to use as training data. “This means that white supremacist and misogynistic, ageist, etc., views are overrepresented,” Gebru and her colleagues wrote in the paper. At its core was the same maxim that had underpinned Gebru and Buolamwini’s facial-recognition research: if you train an AI on biased data, it will give you biased results.
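
To make that maxim concrete, here is a minimal sketch; it is not from Gebru’s paper, and the training sentences and the “groupX” token are invented for illustration. A toy word-count sentiment model trained on skewed examples ends up scoring an innocuous sentence as negative simply because it mentions the term.

```python
from collections import Counter

# Hypothetical training data: the invented token "groupX" appears only
# in negative examples, mimicking how scraped web text can overrepresent
# hostile mentions of a group.
train = [
    ("the film was wonderful", 1),
    ("a truly great experience", 1),
    ("groupX people ruined it", 0),
    ("awful groupX behavior again", 0),
    ("groupX is terrible", 0),
]

pos, neg = Counter(), Counter()
for text, label in train:
    (pos if label == 1 else neg).update(text.split())

def score(text: str) -> int:
    # Net count: words seen more often in positive examples add to the
    # score; words seen more often in negative examples subtract.
    return sum(pos[w] - neg[w] for w in text.split())

# A neutral sentence is scored negative purely because it contains
# "groupX": biased data in, biased results out.
print(score("groupX attended the film"))  # prints -1
```

Real language models are vastly more complex, but the failure mode the paper describes is the same in kind: statistical associations in the training corpus become the model’s behavior.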
The paper that Gebru and her colleagues wrote is now “essentially canon” in the field of responsible AI, according to Rumman Chowdhury, the director of Twitter’s machine-learning ethics, transparency and accountability team. She says it cuts to the core of the questions that ethical AI researchers are attempting to get Big Tech companies to reckon with: “What are we building? Why are we building it? And who is it impacting?”
But Google’s management was not happy. After the paper was submitted for an internal review, Gebru was contacted by a vice president, who told her the company had issues with it. Gebru says Google initially gave vague objections, including that the paper painted too negative a picture of the technology. (Google would later say the research did not account for safeguards that its teams had built to protect against biases, or its advancements in energy efficiency. The company did not comment further for this story.)
Google asked Gebru to either retract the paper or remove from it her name and those of her Google colleagues. Gebru says she replied in an email saying that she would not retract the paper, and would remove the names only if the company came clean about its objections and who exactly had raised them—otherwise she would resign after tying up loose ends with her team. She then emailed a group of women colleagues in Google’s AI division separately, accusing the company of “silencing marginalized voices.” On Dec. 2, 2020, Google’s response came: it could not agree to her conditions, and would accept her resignation. In fact, the email said, Gebru would be leaving Google immediately because her message to colleagues showed “behavior that is inconsistent with the expectations of a Google manager.” Gebru says she was fired; Google says she resigned.

In an email to staff after Gebru’s departure, Jeff Dean, the head of Google AI, attempted to reassure concerned colleagues that the company was not turning its back on ethical AI. “We are deeply committed to continuing our research on topics that are of particular importance to individual and intellectual diversity,” he wrote. “That work is critical and I want our research programs to deliver more work on these topics—not less.”

Today, the idea that AI can encode the biases of human society is not controversial. It is taught in computer science classes and accepted as fact by most AI practitioners, even at Big Tech companies. But to some who are of the same mind as Gebru, it is only the first epiphany in a much broader—and more critical—worldview. The central point of this burgeoning school of thought is that the problem with AI is not only the ingrained biases in individual programs, but also the power dynamics that underpin the entire tech sector. In the context of an economy where founders of platforms like Amazon, Google and Facebook have amassed more wealth than nearly anybody else in human history, proponents of this belief see AI as just the latest and most powerful in a sequence of tools wielded by capitalist elites to consolidate their wealth, cannibalize new markets, and penetrate ever more deeply into the private human experience in pursuit of data and profit.
To others in this emerging nexus of resistance, Gebru’s ouster from Google was a sign. “Timnit’s work has pretty unflinchingly pulled back the veil on some of these claims that are fundamental to these companies’ projections, promises to their boards and also to the way they present themselves in the world,” says Meredith Whittaker, a former researcher at Google who resigned in 2019 after helping lead worker resistance to its cooperation with the Pentagon. “You saw how threatening that work was, in the way that Google treated her.”
Whittaker was recently appointed as a senior adviser on AI to the Federal Trade Commission (FTC). “What I am concerned about is the capacity for social control that [AI] gives to a few profit-driven corporations,” says Whittaker, who was not speaking in the capacity of her FTC role. “Their interests are always aligned with the elite, and their harms will almost necessarily be felt most by the people who are subjected to those decisions.”
It’s a viewpoint that Big Tech could not disagree with more, but to which European regulators

‘Feeding AI systems on the world’s beauty, ugliness and cruelty, but expecting it to reflect only the beauty is a fantasy.’
ABEBA BIRHANE AND VINAY PRABHU, QUOTED IN GEBRU’S PAPER