THE WALL STREET JOURNAL | JOURNAL REPORT: BIG ISSUES | Monday, February 24, 2020

Should Government Halt the Use of Facial-Recognition Technology?

Facial recognition is becoming an important tool in a range of consumer, business and law-enforcement uses. But is it accurate—and fair?

Facial-recognition software promises to speed up the job of determining, or verifying, people’s identity by rapidly checking faces against a database. U.S. Customs and Border Protection has been testing the technology at airports, using it to verify the identity of people entering the U.S. as well as departing travelers. Other law-enforcement agencies, such as local police forces, are using the technology to identify suspects or missing people. (A simplified sketch of that matching step appears after this overview.)

Facial recognition is also showing up in business and consumer applications. Some airlines are using it to speed up check-in, for instance, and some electronic devices use it as a security measure.

But as the systems spread, civil-liberties advocates and other groups are calling for a moratorium on the use of the technology.

Critics point to several incidents of alleged abuse of the technology by police. Meanwhile, a far-reaching government analysis of the most widely used facial-recognition algorithms found most of them misidentified Asian-Americans and African-Americans far more often than Caucasians. The research, conducted by the National Institute of Standards and Technology—a laboratory affiliated with the Commerce Department—also found significant differences in accuracy when an algorithm is used to compare two photos to determine whether they show the same person.

Due to concerns about reliability, some makers of law-enforcement technology have said they aren’t including facial recognition in their products. Government pressure is growing as well, with some local lawmakers restricting use of the technology and some members of Congress voicing opposition.

[Photo: A facial-recognition system in use at Dulles Airport outside Washington. Jim Lo Scalzo/EPA-EFE/Rex/Shutterstock]
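For readers who want a concrete picture of the matching step described above: most systems reduce each face image to a numeric “template” and compare templates with a similarity score against a tunable threshold. The sketch below is a minimal illustration, not any vendor’s actual method; the random-projection embed_face stand-in, the 64x64 input size and the 0.6 threshold are all assumptions made for the example.

```python
# Illustrative 1:1 verification (a face vs. a passport photo) and 1:N
# identification (a face vs. a database), the two uses described above.
import numpy as np

rng = np.random.default_rng(0)
# Stand-in "model" weights; a real system uses a trained neural network.
PROJECTION = rng.normal(size=(64 * 64, 128))

def embed_face(image: np.ndarray) -> np.ndarray:
    """Map a 64x64 grayscale image to a unit-length 128-number template."""
    v = image.reshape(-1) @ PROJECTION
    return v / np.linalg.norm(v)

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two templates (closer to 1.0 = more alike)."""
    return float(a @ b)

def verify(probe_img, reference_img, threshold=0.6):
    """1:1 check, as at a border gate: does this face match this passport?"""
    return similarity(embed_face(probe_img), embed_face(reference_img)) >= threshold

def identify(probe_img, database, threshold=0.6):
    """1:N search: best-scoring enrolled identity, or None if all fall short."""
    probe = embed_face(probe_img)
    best_name = max(database, key=lambda name: similarity(probe, database[name]))
    best_score = similarity(probe, database[best_name])
    return (best_name, best_score) if best_score >= threshold else (None, best_score)

# Toy usage with random "images":
database = {name: embed_face(rng.random((64, 64))) for name in ("traveler_a", "traveler_b")}
print(identify(rng.random((64, 64)), database))
```

The threshold is where much of the accuracy debate lives: set it low and a system produces more false matches; set it high and it produces more false non-matches. The error rates discussed in the two essays that follow are measurements of exactly these failure modes.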
No: Despite the Cries of Alarm, the Technology Is Mostly Beneficial

By Daniel Castro

WHEN IT comes to new technologies, it is easy to imagine the worst. In the 1960s, for example, many people feared transistors would spell the end of privacy, with miniature electronics used to eavesdrop on private conversations.

Today, we see alarm about facial recognition. But just as transistors didn’t give rise to widespread eavesdropping, neither will facial recognition lead to pervasive surveillance.

The most common facial-recognition applications are benign. Millions of Americans use facial recognition to secure their mobile phones and tag photos. Meanwhile, there are countless examples of how facial recognition is reducing financial fraud at banks, preventing medical errors in hospitals, protecting small businesses from repeated theft and robbery, and improving security and convenience at schools and airports.

On the case

The biggest area of controversy is law enforcement. Some people fear that facial recognition will supercharge surveillance networks. But Fourth Amendment protections, coupled with deeply held views about civil liberties, put limits on what government can do.

Aside from a few scenarios, such as quickly locating an abducted child or a terrorist, closely tracking people’s movements isn’t being considered domestically. Law enforcement is mostly using facial recognition for routine activities such as securing ports of entry and identifying victims of crimes. For example, immigration officers can check whether travelers’ faces match the passports they are holding.

Charges of bias or error in the technology are overblown. Critics, for instance, say that some racial groups are misidentified more often. But some of the error rates between racial groups are so low that slight differences seem huge. Consider the results of a recent government study that found in one algorithm an error rate of 0.01% for white males and 0.79% for American Indian females—the rate for one group is 79 times larger, but still less than 1%.
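The relative-versus-absolute distinction in those figures is easy to miscount, so here is the arithmetic made explicit. The snippet simply restates the two rates quoted above in both framings; the per-million counts are our own illustration, not figures from the study.

```python
# The two error rates quoted above, expressed as fractions.
white_male_error = 0.0001        # 0.01%
am_indian_female_error = 0.0079  # 0.79%

# Relative framing: one rate is 79 times the other.
print(f"relative gap: {am_indian_female_error / white_male_error:.0f}x")

# Absolute framing: both rates remain below 1%.
print(f"absolute gap: {am_indian_female_error - white_male_error:.2%}")

# The same rates as expected errors per million comparisons.
print(f"per million: {white_male_error * 1e6:,.0f} vs. {am_indian_female_error * 1e6:,.0f}")
```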
More broadly, this critique assumes that because some systems have bias, all of them must. As the study above found, some of the best-performing algorithms had lower error rates on black females than on white males. Obviously, government agencies should set standards so that they procure only the best-performing systems that have undergone rigorous public testing.
Facial-recognition detractors also point to hypothetical harms to disadvantaged communities and ignore tangible benefits, notably, better policing to address crimes that might otherwise go unsolved in those communities. One important use of facial recognition is to find victims of human trafficking, a crime that disproportionately affects people of color, immigrants and the poor.
Who’s watching?

Another target of criticism is oversight, or the supposed lack of it. But any operator of a facial-recognition system must comply with privacy laws. Moreover, every government agency that uses facial recognition is subject to accountability measures.

Could police misuse facial recognition? Of course. But that calls for more oversight and transparency, not for cutting resources in ways that limit the technology’s effectiveness. Law-enforcement agencies should, for instance, be required to disclose when they use the technology, how they obtain images and how long they keep them. Lawmakers can place reasonable limits on the use of facial recognition—such as requiring a warrant to use it to track someone.

Rather than setting down roadblocks to further innovation and use, we need guardrails to ensure the public and private sectors use the technology safely and responsibly.
Mr. Castro is vice president of the Information Technology and Innovation Foundation, a think tank for science and technology policy, and director of the foundation’s Center for Data Innovation. He can be reached at [email protected].
Yes: The Technology Is Both Biased and Prone to Errors

By Meredith Whittaker
LAWMAKERS who care about constitutional rights and widening inequality must halt the use of facial recognition in sensitive domains by government and commercial actors.

These systems are biased in how they work and how they are deployed, and they are often created and implemented in obscurity, without public review or full accountability under the law.
Most systems are developed by private companies that license them to governments or businesses. Companies hide behind claims of trade secrecy, entering into contracts that bypass public-disclosure provisions and allow facial recognition to be deployed in secret. Researchers, lawmakers and the public rarely have access to examine these systems or answer critical questions.
When researchers do get access to facial-recognition technology, they find it doesn’t work as advertised. A growing body of evidence shows that facial recognition is frequently biased and error-prone, with a recent U.S. government audit confirming that some systems were 100 times more likely to be inaccurate for black and Asian people than for white people.
While some proponents of facial recognition claim these inaccuracies aren’t important, since there are also systems that government tests show to be “accurate,” this does not stand up to scrutiny. Government audits, like the one cited here, are performed in controlled laboratory settings. Real-world uses are often much messier, relying on blurry or bad images, with recent reports from the Georgetown Center on Privacy and Technology showing routine cases of use that fall outside safe standard operating procedure.
The power structure

But inaccuracy is far from facial recognition’s only problem. Facial recognition is generally applied by those who already have power—like employers, landlords and the police—to surveil and in some cases oppress those who don’t. And because there are no restrictions on data sharing between private users and government, regulations that focus solely on government use don’t account for the full threat of this technology.
We already see alleged abuse of power in practice, in cases like the Baltimore Police Department and the Freddie Gray protesters. The police reportedly scanned photos to target people with outstanding warrants, according to material posted on the ACLU of Northern California website—chilling free speech and undermining the right to due process. (The Baltimore Police Department and mayor’s office did not respond to requests for comment.)
While proponents of facial recognition focus on beneficial uses, often pointing to the convenience of unlocking our phones with our faces or invoking hypothetical public-safety improvements, they rarely ask what we’re trading in exchange for such convenience, nor do they acknowledge the lack of evidence supporting claims that facial recognition increases public safety.
It is difficult to think of an industry where we permit companies to treat the public as experimental subjects, deploying untested and faulty technology that has been proved to violate civil rights and to amplify bias and discrimination. The dangers of bias and error have serious implications even when the systems are automating “routine” functions like check-ins at airports.
A call to halt

The idea that we can continue deploying this technology, assuming “the right rules” will prevent harm, isn’t supported by evidence. Technical advancement has historically outpaced the law, and these systems allow government and businesses to intrude into our lives without detection, threatening our constitutional rights and enabling suspicionless surveillance and social control.

It is time for lawmakers to halt government and commercial use of facial recognition in sensitive domains. Once this is done, research assessing facial recognition’s risks can be undertaken.
Ms. Whittaker is co-founder of the AI Now Institute at New York University, which studies the social implications of artificial intelligence. She can be reached at [email protected].
A Closer Look

State and local measures to limit use of facial-recognition technology date back to at least 2016 and have increased in the past year.

Ban on some police use*: New Hampshire (passed June 2016); Oregon (2017).

Ban on municipal use: San Francisco (May 2019); Oakland, Calif. (June 2019); Somerville, Mass. (June 2019); Berkeley, Calif. (October 2019); Brookline, Mass. (December 2019); Northampton, Mass. (December 2019); Cambridge, Mass. (January 2020).

Moratorium on some police use*: California (October 2019, through end of 2022).

*These state measures prohibit the use of facial-recognition technology in conjunction with police body cameras and, in California, with other mobile devices.

Source: Electronic Frontier Foundation