
INSIGHTS | PERSPECTIVES



of determining threat would likely not be clear to the public.

This question is not purely academic. In 2016, the Dallas Police Department ended a 5-hour standoff with a cornered sniper by using a robot that was modified to deliver a bomb that ultimately killed the sniper (11). This repurposing of the robot shows how, in certain situations, machines can perform dangerous duties and limit the loss of life among law enforcement officers, but this isolated case brings to the forefront the question of when machines should be used in this capacity. The rules of conduct and use that govern their public interactions must consider adverse effects on minority communities. Laws, regulations, and policies must be created or updated to address their use and any unintended consequences.
In the case of this Digidog, the algorithms may be capturing variables such as race, gender, ethnicity, and other information that could create and reinforce biased data, leading to discriminatory practices (12) and violating situational fairness (13). In the case of facial recognition algorithms, the pool of training data may be heavily biased toward one race, which would lead the algorithm to make false predictions when it encounters other races.
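
To make this failure mode concrete, the sketch below (not part of the original article; the scores, group labels, and threshold are hypothetical) shows how a face-verification system can be audited by computing error rates separately for each demographic group. A single similarity threshold tuned on data dominated by one group can look accurate overall while producing far more false matches for underrepresented groups.

# Minimal sketch with invented data: auditing per-group false-match rates for a
# face-verification system that declares a match whenever the similarity score
# between two photos exceeds a single global threshold.
from collections import defaultdict

# Each record: (demographic group, similarity score, whether the two photos
# really show the same person). All values here are hypothetical.
pairs = [
    ("group_a", 0.97, True), ("group_a", 0.62, False), ("group_a", 0.55, False),
    ("group_b", 0.91, True), ("group_b", 0.90, False), ("group_b", 0.88, False),
]

THRESHOLD = 0.85  # single global threshold, typically tuned on the majority group

def false_match_rates(pairs, threshold):
    """Fraction of different-person pairs incorrectly accepted, per group."""
    accepted = defaultdict(int)
    impostors = defaultdict(int)
    for group, score, same_person in pairs:
        if not same_person:            # an impostor pair (different people)
            impostors[group] += 1
            if score >= threshold:     # incorrectly declared a positive match
                accepted[group] += 1
    return {g: accepted[g] / impostors[g] for g in impostors}

print(false_match_rates(pairs, THRESHOLD))
# -> {'group_a': 0.0, 'group_b': 1.0}: the same pipeline yields very different
#    error rates, which is how biased training data surfaces as discriminatory outcomes.

Reporting such per-group metrics, as the NIST FRVT demographic-effects evaluation does (6), makes the disparity visible before deployment.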
Despite these ongoing limitations, culturally aware HMI is possible; we discuss three fundamental aspects of it. One is to identify ways to partner directly with communities of interest. For example, in one study, members of a low-income community became research partners with leadership roles (14). This model has transformative potential for HMI researchers: communities not typically represented in HMI data can foster empathy and be part of the initial discussion, as opposed to being consulted after product development or excluded completely. Consider 10 years from now, when an autonomous ride-sharing vehicle is sent in response to a ride request placed by a relative or a social worker to pick up passengers with little education and no prior exposure to a driverless vehicle. The riders would need to be prepared beforehand and assured of their safety.
Another aspect of culturally aware HMI is to recruit diverse students who can engage in this research and also serve as ambassadors to diverse communities. Despite the great pressures to publish research quickly, constant reliance on convenience samples for datasets is problematic and exclusionary. Researchers could recruit students as early as high school, train them in best practices for human-subject studies, and partner with them to reach diverse communities. For example, one research team worked with minoritized high school students to solve problems related to aerial drones, which involved natural interactions with members of their own community (15).
Culturally aware HMI is also possible by instituting third-party oversight (for example, review boards). Its role is to provide unbiased accountability for how HMIs are used and deployed, ensuring that they are fair toward and safe for humanity and that diverse participants are included in the studies. This approach would mirror requirements in pharmaceutical testing, which involves ethical participation, informed consent in clinical studies, and preregistration of diverse participants to ensure efficacy and safety across the general population. For the academic community, funding agencies would need to require such participation and build the costs of recruiting and remunerating participants into grants. For academic HMI researchers, it is important to acknowledge the lack of diversity to date in most human trials, explain the moral and financial benefits of diversity and inclusion, and let their own research efforts lead by example.
For small start-up companies, the success of a new product is not ensured, and testing is often limited by cost to specialized “beta groups.” However, a diverse design team is more likely to build in features that make an HMI more accessible, and testing with diverse subjects earlier in the process can identify design flaws. Greater inclusivity of participants is clearly needed for products targeting public applications aimed at surveillance and security, as well as for medical applications.
The developers of technological advances for the general public must mitigate harm, not only by minimizing risks (such as safety features on vehicles) but also by educating the public so that they can make informed decisions, similar to patients weighing whether a drug's side effects outweigh its benefits. In the pharmaceutical industry, immoral overreaches to increase profits can have severe financial consequences because of oversight by government regulators. Similar regulatory efforts could be applied in HMI research to ensure a level playing field, but they would also carry the risk that only large, established companies could develop products.
The need to ensure equity is clear, although the path to it is not. The drive to develop products for high-income users and the current lack of diversity argue that private-sector efforts alone are insufficient. Regulatory oversight is needed that considers the financial, safety, and privacy rights of participants. Such efforts require that the general public, academia, and industrial communities develop guidelines to ensure that HMIs are safe, equitable, and accessible regardless of cultural or socioeconomic differences.

REFERENCES AND NOTES


1. L. Salgado, R. Pereira, I. Gasparini, in Human-Computer Interaction: Design and Evaluation, M. Kurosu, Ed. (Springer International, 2015), pp. 60–70.
2. Z. Ishak, A. Jaafar, A. Ahmad, Procedia Soc. Behav. Sci. 65, 793 (2012).
3. C. Foster, R. Heeks, Eur. J. Dev. Res. 25, 333 (2013).
4. P. D. Vascik, R. J. Hansman, N. S. Dunn, J. Air Transport. 26, 133 (2018).
5. A. Howard, Sex, Race, and Robots: How to Be Human in the Age of AI (Audible Originals, 2020).
6. P. Grother, M. Ngan, K. Hanaoka, Face Recognition Vendor Test (FRVT): Part 3, Demographic Effects (National Institute of Standards and Technology, 2019).
7. K. Wiggers, "NIST benchmarks show facial recognition technology still struggles to identify Black faces," VentureBeat, 9 September 2020.
8. B. Allyn, "'The Computer Got It Wrong': How Facial Recognition Led To False Arrest Of Black Man," NPR, 24 June 2020.
9. S. Bushwick, Sci. Am. (7 May 2021); http://www.scientificamerican.com/article/the-nypds-robot-dog-was-a-really-bad-idea-heres-what-went-wrong/.
10. A. Selyukh, "Bomb Robots: What Makes Killing In Dallas Different and What Happens Next?" NPR, 8 July 2016.
11. D. Hambling, New Scientist (14 October 2021).
12. A. Howard, J. Borenstein, Sci. Eng. Ethics 24, 1521 (2018).
13. L. Floridi, J. Cowls, T. C. King, M. Taddeo, Sci. Eng. Ethics 26, 1771 (2020).
14. S. L. Syme, Prev. Chronic Dis. 1, 1 (2004); http://www.cdc.gov/pcd/issues/2004/jan/03_0001.htm.
15. J. Bhuyan et al., TechTrends 64, 899 (2020).


ACKNOWLEDGMENTS
We thank S. Smith, R. Gibert, S. D. Miller, T. Schweisinger, D. Cappelleri, P. Papalambros, and P. Davies for their thoughtful feedback on drafts of this article.

10.1126/science.abf2618

Face-recognition disparities
Similarity is evaluated as a percentage between photos of a white male and a Black female. For the white male face, the similarity score is higher (95% match). For the Black female face, the similarity score is lower (90% match), yet the system still (incorrectly) concludes that a positive identification has been made.
GRAPHIC: K. FRANKLIN/SCIENCE, BASED ON JAMES GIBERT