Science - USA (2022-01-14)

ENGINEERING

Inclusion in human–machine interactions

Human–machine interactions research should include diverse subjects and benefit all people

By Tahira Reid and James Gibert

School of Mechanical Engineering, Purdue University, 585 Purdue Mall, West Lafayette, IN, USA. Email: [email protected]
Human–machine interactions (HMIs) describe how humans engage various systems, including those that are smart, autonomous, or both. Most HMIs either allow the human to control the machine (an instrument panel), allow the machine to obtain data (a heart monitor), or even both (a virtual reality setup). HMIs may be placed in three broad classes. In one class, the individual is active in the interaction—that is, the individual is the user or purchaser of a technology such as an automobile. In another class, the user is passive but consenting in the interaction—that is, the interaction occurs with their consent, such as the use of devices for medical diagnosis. There is also a class in which the user is passive and nonconsenting in the interaction, such as the use of facial recognition for law enforcement purposes.
The human in HMI implies inclusion regardless of ability, ethnicity, race, or social class, but the current research paradigm shows many examples of exclusion. The processes and procedures that govern HMI do not account for the heterogeneity of the human beings included in the data. However, a design science paradigm can inform the creation of the interfaces between humans and machines and their embedding in our natural, virtual, psychological, economic, and social environments. Efforts to overcome implicit biases propagated through the HMI system require team members who can challenge cultural hegemony, along with greater governance of design and operation.
These biases can manifest in HMI in several ways when such systems are developed and implemented. Imposing the cultural norms of an industrialized society on technology deployed in developing countries can lead to failure. For example, a more expensive recyclable product might still be disposed of if the infrastructure required for collection and transportation is not available in a developing country and the users are not informed about the process (1). Differences between the cultures of nations can inhibit technology adoption, such as cultural assumptions built into computer-user interfaces (2). Even within an industrialized country, the cultural norms of the upper socioeconomic class might accept a technology, such as fingerprint, retinal, or voice recognition on a cell phone, that could be deemed suspicious by lower socioeconomic communities.
The cultural norms of an industrialized society often follow a traditional innovation process that targets high-income stakeholders rather than an inclusive innovation process that also considers low-income stakeholders (3). These two approaches differ in how technologies are developed, whom they benefit most, and how stakeholders learn about and engage with them. Urban air-mobility systems are a good example of how high-income communities are prioritized, such as in decisions about where airports are located in terms of noise and access, as evidenced by the income parameters included in their models (4).
In many countries, the focus is often on HMI in which the individual is active in the interaction. Lower socioeconomic classes, which often include a greater fraction of ethnic minorities and immigrants, represent a demographic whose needs should be included in the design and trial phases of HMI systems. However, the individuals reflected in HMI studies are commonly affluent and educated. The pressure to conduct research in a timely manner, whether for publishing findings or delivering products, causes many researchers to work with study participants from campus or corporate communities out of convenience.
More severe consequences can result when the humans in an HMI system are the subjects from whom data are taken—that is, when they are passive and either consenting or nonconsenting in the interactions. In areas where subjects give consent, such as medical testing, biases in data recording and interpretation can result when designers are not intentional about human heterogeneity, both technically and culturally. For example, virtual doctor visits for skin conditions such as rashes or wound healing are often designed around subjects with white skin and can fail for dark skin.
Biased algorithms do not work for all skin tones, ethnic groups, or genders (5). Despite advances in algorithm design, facial recognition algorithms have several known problems with the basic identification of faces from different races, regardless of the country in which the algorithm is developed (see the figure). The dangers of bias and the disregard of cultural and social considerations were identified by researchers at the National Institute of Standards and Technology (NIST), who noted that, like humans, facial recognition algorithms exhibit bias (6), a phenomenon known as “algorithmic bias,” according to digital activist Joy Buolamwini. For example, algorithms developed in East Asian and Western countries identified faces from the developers’ own races more accurately than faces from other races.
Furthermore, recent data suggest that algorithms struggle to identify people of color. For example, ongoing work from NIST indicates that facial recognition software used by the Detroit Police Department misidentifies Black males between one and two times in 1000, and Black females between two and three times in 1000 (7). The numbers may appear insignificant, but considering that an estimated 79% of Detroit’s 4 million residents are Black, errors of this kind have led to at least one documented wrongful arrest (7, 8).
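To illustrate the scale these rates imply, the back-of-envelope sketch below multiplies the quoted per-1000 misidentification rates by the population figures given above. It is only an illustration under stated assumptions (each resident in a group is searched once, and the group splits evenly between males and females); the population and error-rate values are those quoted in the text, not official NIST or census statistics.

# Back-of-envelope estimate of potential misidentifications at city scale.
# Assumptions (illustrative only): the figures are the ones quoted in the
# text above, each resident in a group is searched once, and the group
# splits evenly between males and females.

detroit_population = 4_000_000        # residents cited in the text
black_share = 0.79                    # estimated share cited in the text
black_residents = detroit_population * black_share

# Misidentification rates quoted in the text, as (low, high) per search.
rates = {
    "Black males": (1 / 1000, 2 / 1000),
    "Black females": (2 / 1000, 3 / 1000),
}

for group, (low, high) in rates.items():
    group_size = black_residents / 2  # crude even male/female split
    print(f"{group}: roughly {group_size * low:,.0f} to "
          f"{group_size * high:,.0f} potential misidentifications")

Even at error rates of a fraction of a percent, such a calculation suggests thousands of potential misidentifications once the software is applied across an entire city.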
Deploying new technologies without engagement from key stakeholders is a prescription for failure. For example, from 2020 to 2021, the New York Police Department field-tested a robotic dog (named “Digidog”), created by Boston Dynamics, in Brooklyn, Queens, and the Bronx. Although it was helpful in performing various tasks for the police force, the communities in which it was embedded were not informed or educated about its use, functionality, and deployment (9). Employing this technology in communities is especially problematic given the historical use of dogs to capture slaves and as tools to terrorize civil rights protestors. The Digidog has been removed, but a greater threat lies on the horizon: at the annual meeting of the Association of the United States Army, a robotic dog developed by Ghost Robotics was showcased with a sniper rifle mounted on it (10). Although the current program is confined to military use, it is conceivable that in the future it may be adapted for civilian police use. If it is adapted, the basis
