Time - USA (2022-01-31)

companies benefiting most from automated abuses of power. Gebru herself is seeking to push the AI world beyond the binary of asking whether systems are biased and to instead focus on power: who's building AI, who benefits from it, and who gets to decide what its future looks like.
The day after our Zoom call, on the anniversary of her departure from Google, Gebru launched the Distributed AI Research (DAIR) Institute, an independent research group she hopes will grapple with how to make AI work for everyone. "We need to let people who are harmed by technology imagine the future that they want," she says.

When Gebru was a teenager, war broke out between Ethiopia, where she had lived all her life, and Eritrea, where both her parents were born. It became unsafe for her to remain in Addis Ababa, the Ethiopian capital. After a "miserable" experience with the U.S. asylum system, Gebru finally made it to Massachusetts as a refugee. Immediately, she began experiencing racism in the American school system, where even as a high-achieving teenager she says some teachers discriminated against her, trying to prevent her from taking certain AP classes. Years later, it was a pivotal experience with the police that put her on the path toward ethical technology. She recalls calling the cops after her friend, a Black woman, was assaulted in a bar. When they arrived, the police handcuffed Gebru's friend and later put her in a cell. A report on the assault was never filed, she says. "It was a blatant example of systemic racism."
While Gebru was a Ph.D. student at Stanford in the early 2010s, tech companies in Silicon Valley were pouring colossal amounts of money into a previously obscure field of AI called machine learning. The idea was that with enough data and processing power, they could teach computers to perform a wide array of tasks, like speech recognition, identifying a face in a photo or targeting people with ads based on their past behavior. For decades, most AI research had relied on hard-coded rules written by humans, an approach that could never cope with such complex tasks at scale. But by feeding computers enormous amounts of data—now available thanks to the Internet and smartphone revolutions—and by using high-powered machines to spot patterns in those data, tech companies became enamored with the belief that this method could unlock new frontiers in human progress, not to mention billions of dollars in profits.
In many ways, they were right. Machine learning became the basis for many of the most lucrative businesses of the 21st century. It powers Amazon's recommendation engines and warehouse logistics and underpins Google's search and assistant functions, as well as its targeted advertising business. It also promises to transform the terrain of the future, offering tantalizing prospects like AI lawyers who could give affordable legal advice or AI doctors who could diagnose patients' ailments within seconds, or even AI scientists.
By the time she left Stanford, Gebru knew she wanted to use her new expertise to bring ethics into this field, which was dominated by white men. She says she was influenced by a 2016 ProPublica investigation into predictive policing, which detailed how courtrooms across the U.S. were adopting software that offered to predict the likelihood of defendants reoffending in the future, to advise judges during sentencing. By looking at actual reoffending rates and comparing them with the software's predictions, ProPublica found that