Time - USA (2022-01-31)


are also paying attention. The E.U. is currently scrutinizing a wide-ranging draft AI act. If passed, it could restrict forms of AI that lawmakers deem harmful, including real-time facial recognition, although activists say it doesn’t go far enough. Several U.S. cities, including San Francisco, have already implemented facial-recognition bans. Gebru has spoken in favor of regulation that defines what kinds of uses of AI are unacceptable, and sets better guardrails for those that remain. She recently told European lawmakers scrutinizing the new bill: “The No. 1 thing that would safeguard us from unsafe uses of AI is curbing the power of the companies who develop it.”
She added that increasing legal protections for tech workers was an essential part of making sure companies did not create harmful AI, because workers are often the first line of defense, as in her case. Progress is being made on this front too. In October 2021, the Silenced No More Act came into force in California, preventing big companies from using NDAs to silence employees who complain about harassment or discrimination. In January 2021, hundreds of Google workers unionized for the first time. In the fall, Facebook whistle-blower Frances Haugen disclosed thousands of pages of internal documents to authorities, seeking whistle-blower protection under federal law.
Gebru sees her research institute DAIR as another organ within this wider push toward tech that is socially responsible, putting the needs of communities ahead of the profit incentive and everything that comes with it. At DAIR, Gebru will work with researchers around the world across multiple disciplines to examine the outcomes of AI technology, with a particular focus on the African continent and the African diaspora in the U.S. One of DAIR’s first projects will use AI to analyze satellite imagery of townships in South Africa, to better understand legacies of apartheid. DAIR is also working on building an industry-wide standard that could help mitigate bias in data sets, by making it common practice for researchers to write accompanying documentation about how they gathered their data, what its limitations are and how it should (or should not) be used.

Gebru says DAIR’s funding model gives it freedom too. DAIR has received $3.7 million from a group of big philanthropists including the Ford, MacArthur and Open Society foundations. It’s a novel way of funding AI research, with few ties to the system of Silicon Valley money and patronage that often decides which areas of research are worthy of pursuit, not only within Big Tech companies, but also within the academic institutions to which they give grants.
Even though DAIR will be able to conduct only a small handful of studies, and its funding pales in comparison with the money Big Tech is prepared to spend on AI development, Gebru is optimistic. She has already demonstrated the power of being part of a collective of engaged collaborators working together to create a future in which AI benefits not just the rich and powerful. They’re still the underdogs, but the impact of their work is increasing. “When you’re constantly trying to convince people of AI harms, you don’t have the space or time to implement your version of the future,” Gebru says. “So we need alternatives.” — With reporting by Nik Popli

A facial-recognition AI that can identify individuals in a crowd, on show at the 2019 CES convention in Las Vegas

DAVID MCNEW—AFP/GETTY IMAGES
