One program interpreted Black players as showing contempt three times as often as white players when the expression was ambiguous. The problem can likely be traced back to bias in the labeled images in training data sets, she says.
Microsoft and Face++ did not respond to requests for comment.
Many companies now emphasize that they are aware of and addressing such issues. Retorio’s algorithm was trained on a data set of short interview videos labeled with personality traits, compiled over a period of years using paid volunteers, co-founder Christoph Hohenberger says. The company has taken steps to filter out various demographic and cultural biases that would tend to favor one group over another in the personality assessments, he says.
But because there is currently no regulation or oversight of the industry, in most cases we have to take a company’s word for it; the robustness and equity of proprietary data sets are hard to verify. HireVue, a company that conducts video interviews with algorithmic analysis of the text and vocal tone, brought on an outside auditor to check for bias, but such audits are rare.
“This idea that there exists one standard for humans to be and that everyone can meet it equally” is fundamentally flawed, says Ifeoma Ajunwa, an associate professor at the University of North Carolina School of Law, who studies AI decision-making. The assumption, she says, means that “everyone who doesn’t meet that standard is disadvantaged.”

In addition to concerns about bias, the idea that outside appearances match a decipherable inner emotion for everyone has also started to generate strong scientific opposition. That is a change from when the concept got its start more than 50 years ago. At that time Ekman and Friesen were conducting fieldwork with the Fore, an Indigenous group in the highlands of southeast Papua New Guinea, to see if they recognized and understood facial expressions the same way as people from radically different backgrounds did: a stevedore from Brooklyn, say, or a nurse in Senegal. Volunteers were shown sets of photos of people making expressions for what the scientists called the six basic emotions. To provide context, a translator offered brief descriptors (“He/she is looking at something which smells bad” for disgust, for instance). The Fore responses were virtually identical to those of people surveyed in countries such as Japan or Brazil or the U.S., so the researchers contended that facial expressions are a universally intelligible emotional language.
The notion of a shared group of expressions representing basic emotional states quickly became popular in psychology and other fields. Ekman and Friesen developed an atlas of thousands of facial movements to interpret these expressions, called the Facial Action Coding System (FACS). Both the atlas and the theory became cornerstones of emotion AI. The work has been incorporated into many AI applications, such as those developed by the company Affectiva, which include in-car systems and market research.
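
FACS describes expressions as combinations of numbered “action units” (AUs): AU6 is the cheek raiser, AU12 the lip corner puller, and so on. The sketch below illustrates, in simplified form, how a system built on FACS might map detected AUs to basic-emotion labels. The AU prototypes are commonly cited simplifications, and the matching function is a hypothetical illustration, not any vendor’s actual code.

```python
# Illustrative sketch: mapping detected FACS action units (AUs) to one of
# the six basic-emotion labels. The prototype combinations are simplified
# versions of commonly cited pairings; real systems use richer models.

# AU1 = inner brow raiser, AU6 = cheek raiser, AU12 = lip corner puller, etc.
EMOTION_PROTOTYPES = {
    "happiness": {6, 12},
    "sadness":   {1, 4, 15},
    "surprise":  {1, 2, 5, 26},
    "fear":      {1, 2, 4, 5, 7, 20, 26},
    "anger":     {4, 5, 7, 23},
    "disgust":   {9, 15},
}

def classify_expression(detected_aus: set[int]) -> tuple[str, float]:
    """Return the best-matching label and an overlap score in [0, 1].

    `detected_aus` stands in for the output of an upstream AU detector
    (a separate computer-vision problem not shown here).
    """
    best_label, best_score = "neutral", 0.0
    for label, prototype in EMOTION_PROTOTYPES.items():
        # Jaccard overlap between the detected AUs and the stereotype.
        score = len(detected_aus & prototype) / len(detected_aus | prototype)
        if score > best_score:
            best_label, best_score = label, score
    return best_label, best_score

print(classify_expression({6, 12}))         # ('happiness', 1.0)
print(classify_expression({1, 4, 15, 17}))  # partial match to 'sadness'
```

The appeal to machine-learning engineers is visible in the structure itself: a small, fixed label set and a lookup table are exactly the kind of closed problem the tools handle well.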
But scientists have argued that there are holes in Ekman’s theories. A 2012 study published in the Proceedings of the National Academy of Sciences USA, for instance, presented data showing that facial expressions varied considerably by culture. And in 2019 Lisa Feldman Barrett, a psychologist at Northeastern University, along with several colleagues, published a study that examined more than 1,000 scientific papers on facial expressions. The notion that faces revealed outward signs of common emotions had spread to fields ranging from technology to law, they found, but there was little hard evidence that it was true.
The basic emotions are broad stereotypical categories, Barrett says. Moment to moment, facial expressions reflect complicated internal states: a smile might cover up pain, or it might convey sympathy. And today, she contends, it is almost impossible for an AI system to consistently, reliably categorize those internal states if it has been trained on data sets that are essentially collections of labeled stereotypes. “It’s measuring something and then inferring what it means psychologically,” Barrett says. “But those are two separate things. I can’t say this about every company obviously, because I don’t know everything that everybody is doing. But the emotion-recognition technology that’s been advertised is routinely confounding these two things.”
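
Barrett’s distinction can be made concrete. In the hypothetical sketch below, the measurement step (a facial configuration and its intensity) is kept separate from the inference step (a stereotype lookup). Systems that report only the final label collapse the two; the names and values here are invented for illustration.

```python
# Sketch of the conflation Barrett describes: step 1 is a measurement
# (a facial configuration), step 2 is a psychological inference. The
# labels are hypothetical; the point is that collapsing the two steps
# into a single output hides the inferential leap.

from dataclasses import dataclass

@dataclass
class Measurement:
    """What a vision model can defensibly report: facial geometry."""
    configuration: str   # e.g. "scowl", "smile"
    intensity: float     # 0.0-1.0, strength of the movement

def infer_emotion(m: Measurement) -> str:
    """The contested step: a stereotype lookup, not a fact about the mind.

    A smile might cover pain or convey sympathy; without context, this
    mapping just echoes back the training set's labeled stereotype.
    """
    stereotype = {"scowl": "anger", "smile": "happiness"}
    return stereotype.get(m.configuration, "unknown")

m = Measurement(configuration="smile", intensity=0.8)
print(f"measured: {m.configuration} ({m.intensity})")  # the measurement
print(f"inferred: {infer_emotion(m)}")                 # the leap
```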
One reason for this problem, Crawford says, is that the world of tech start-ups is not aware of scientific debates in other fields, and those start-ups are attracted to the elegant simplicity of systems such as FACS. “Why has the machine-learning field been drawn to Ekman?” Crawford asks. “It fits nicely with a machine-learning capacity. If you say there is a limited set of expressions and strictly limited numbers of potential emotions, then people will adopt that view primarily because the theory fits what the tools can do.”

In addition to Ekman’s work and the personality-trait model of OCEAN, emotion-AI companies have adopted other systems. One is a “wheel of emotions” devised by the late psychologist Robert Plutchik, which is used by Adoreboard, a U.K.-based company that analyzes emotion in text. All these approaches offer to translate the complexity of human affect into straightforward formulas. They may suffer from similar flaws, too. One study found that OCEAN produces inconsistent results across different cultures.
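
Plutchik’s wheel arranges eight primary emotions around a circle so that each sits directly across from its opposite. A minimal sketch of that structure follows; it reflects the wheel’s published layout, not Adoreboard’s actual implementation, which is not public.

```python
# Minimal sketch of Plutchik's "wheel of emotions" as a data structure:
# eight primaries in circular order, so each emotion's opposite sits
# four positions across the wheel.

PRIMARIES = ["joy", "trust", "fear", "surprise",
             "sadness", "disgust", "anger", "anticipation"]

def opposite(emotion: str) -> str:
    """Each primary sits directly across the wheel from its opposite."""
    i = PRIMARIES.index(emotion)
    return PRIMARIES[(i + 4) % 8]

print(opposite("joy"))    # sadness
print(opposite("trust"))  # disgust
print(opposite("fear"))   # anger
```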
Nevertheless, researchers say emotion apps can work if their limitations are understood. Roboticist Ayanna Howard, dean of the College of Engineering at the Ohio State University, uses a modified version of Microsoft’s facial-expression-recognition software in robots to teach social behavior to children with autism. If a robot detects an “angry” expression from its interlocutor, for example, its movements will adapt in ways that calm the situation. The stereotypical facial expressions may not always mean exactly the same thing, Howard says, but they are useful. “Yeah, we’re unique, but we’re not that different from the person next door,” she says. “And so when you’re talking about emotion in general, you can get

“There is a profound slippage between what these things show us and what might be going on in somebody’s mind.”
—Kate Crawford, University of Southern California
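
A minimal sketch of the kind of detect-and-respond loop Howard describes, with hypothetical behavior mappings; her lab’s actual software, built on Microsoft’s tools, is not shown here.

```python
# Hypothetical sketch of a robot's detect-and-respond loop: the
# expression label is treated as a useful signal to adapt behavior,
# not as a window into the child's mind. Behavior strings are invented.

CALMING_RESPONSES = {
    "angry": "slow movements, lower speech volume, increase distance",
    "sad":   "gentle encouragement, switch to a simpler task",
    "happy": "continue current activity",
}

def respond_to_expression(label: str) -> str:
    """Map a detected expression label to a de-escalating behavior."""
    return CALMING_RESPONSES.get(label, "neutral idle behavior")

# In a real system this would run on each frame from the robot's camera.
print(respond_to_expression("angry"))
```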
