Time - USA (2022-01-31)

the AI was not only often wrong, but also dangerously biased: it was more likely to rate Black defendants who did not reoffend as “high risk,” and to rate white defendants who went on to reoffend as “low risk.” The results showed that when an AI system is trained on historical data that reflects inequalities, as most data from the real world does, the system will project those inequalities into the future.
When she read the story, Gebru thought about not only her own experience with police, but also the overwhelming lack of diversity in the AI world she had experienced so far. Shortly after attending a conference in 2015, where she was one of only a few Black attendees, she put her thoughts into words in an article that she never published. “I am very concerned about the future of AI,” she wrote. “Not because of the risk of rogue machines taking over. But because of the homogeneous, one-dimensional group of men who are currently involved in advancing the technology.”
By 2017, Gebru was an AI researcher at Microsoft, where she co-authored a paper called Gender Shades. It demonstrated how facial-recognition systems developed by IBM and Microsoft were almost perfect at detecting images of white people, but not people with darker skin, particularly Black women. The data set that had been used to train the algorithm contained lots of images of white men, but very few of Black women. The research, which Gebru had worked on alongside Joy Buolamwini of MIT Media Lab, forced IBM and Microsoft to update their data sets.
Google hired Gebru shortly after Gender Shades was published, at a time when Big Tech companies were coming under increasing scrutiny over the ethical credentials of their AI research. While Gebru was interviewing, a group of Google employees were protesting the company’s agreement with the Pentagon to build AI systems for weaponized drones. Google eventually canceled the contract, but several employees who were involved in worker activism in the wake of the protests say they were later fired or forced out. Gebru had reservations about joining Google, but believed she could have a positive impact. “I went into Google with my eyes wide open in terms of what I was getting into,” she says. “What I thought was, This company is a huge ship, and I won’t be able to change its course. But maybe I’ll be able to carve out a small space for people in various groups who should be involved in AI, because their voices are super important.”
After a couple of years on the job, Gebru had realized that publishing research papers was more effective at bringing about change than trying to convince her superiors at Google, whom she often found to be intransigent. So when co-workers began asking her questions about the ethics of large language models, she decided to collaborate on a paper about them. In the year leading up to that decision, the hype around large language models had led to a palpable sense of enthusiasm across Silicon Valley. In a stunt a couple of months earlier, the Guardian published an op-ed written by a large language model called GPT-3 from a Microsoft-backed company, OpenAI. “A robot wrote this entire article. Are you scared yet, human?” asked the headline. Investment was flooding into tech firms’ AI research teams, all of which were competing to build models based on ever bigger data sets.
To Gebru and her colleagues, the enthusiasm around language models was leading the industry in a worrying direction. For starters, they knew

From left: A discussion of predictive policing in L.A. in 2016; a 2020 demonstration of a Google AI that can recognize hands. Photographs: Patrick T. Fallon, The Washington Post/Getty Images; David Paul Morris, Bloomberg/Getty Images
