Time - USA (2022-01-31)


TECHNOLOGY


Three hundred and sixty-four days after
she lost her job as a co-lead of Google’s ethical
artificial intelligence (AI) team, Timnit Gebru is
nestled into a couch at an Airbnb rental in Boston,
about to embark on a new phase in her career.
Google hired Gebru in 2018 to help ensure that
its AI products did not perpetuate racism or other
societal inequalities. In her role, Gebru hired
prominent researchers of color, published several
papers that highlighted biases and ethical risks,
and spoke at conferences. She also began raising
her voice internally about her experiences of racism
and sexism at work. But it was one of her research
papers that led to her departure. “I had so
many issues at Google,” Gebru tells TIME over a
Zoom call. “But the censorship of my paper was
the worst instance.”
In that fateful paper, Gebru and her co-authors
questioned the ethics of large language AI
models, which seek to understand and reproduce
human language. Google is a world leader in
AI research, an industry forecast to contribute
$15.7 trillion to the global economy by 2030,
according to accounting firm PwC. But Gebru’s
paper suggested that, in their rush to build bigger,
more powerful language models, companies
including Google weren’t stopping to think about
the kinds of biases being built into them—biases
that could entrench existing inequalities, rather
than help solve them. It also raised concerns
about the environmental impact of the AIs, which
use huge amounts of energy. In the battle for AI
dominance, Big Tech companies were seemingly
prioritizing profits over safety, the authors
suggested, calling for the industry to slow down.
“It was like, You built this thing, but mine is
even bigger,” Gebru recalls of the atmosphere
at the time. “When you have that attitude,
you’re obviously not thinking about ethics.”
Gebru’s departure from Google set off a firestorm
in the AI world. The company appeared to
have forced out one of the world’s most respected
ethical AI researchers after she criticized some of
its most lucrative work. The backlash was fierce.
The dispute didn’t just raise concerns about
whether corporate behemoths like Google’s parent
Alphabet could be trusted to ensure this technology
benefited humanity and not just their
bottom lines. It also brought attention to important
questions: If artificial intelligence is trained
on data from the real world, who loses out when
that data reflects systemic injustices? Were the
companies at the forefront of AI really listening
to the people they had hired to mitigate those
harms? And, in the quest for AI dominance, who
gets to decide what kind of collateral damage is
acceptable?

For the past decade, AI has been quietly seeping
into daily life, from facial recognition to digital
assistants like Siri or Alexa. These largely unregu-
lated uses of AI are highly lucrative for those who
control them, but are already causing real-world
harms to those who are subjected to them: false
arrests; health care discrimination; and a rise in
pervasive surveillance that, in the case of policing,
can disproportionately affect Black people and
disadvantaged socioeconomic groups.
Gebru is a leading figure in a constellation of
scholars, activists, regulators and technologists
collaborating to reshape ideas about what AI is
and what it should be. Some of her fellow travelers
remain in Big Tech, mobilizing those insights
to push companies toward AI that is more ethical.
Others, making policy on both sides of the Atlantic,
are preparing new rules to set clearer limits on the

BIGGER THAN BIAS

By Billy Perrigo

Timnit Gebru helped expose how artificial intelligence replicates prejudice. She’s not waiting for Big Tech to fix it

Gebru in Boston, on Dec. 1, 2021