Scientific American, December 2021

Photograph (preceding pages): Juan Mabromara, AFP and Getty Images

John McQuaid is a journalist and author. He reported
this story while a fellow at the Woodrow Wilson
International Center for Scholars in Washington, D.C.
He is currently a Ph.D. student at the University
of Maryland Merrill College of Journalism.

In Liverpool, England, at a February 2020 conference on the rather unglamorous topic of government purchasing, attendees circulated through exhibitor and vendor displays, lingering at some, bypassing others. They were being closely watched. Around the floor, 24 discreetly positioned cameras tracked each person’s movements and cataloged subtle contractions in individuals’ facial muscles at five to 10 frames per second as they reacted to different displays. The images were fed to a computer network, where artificial-intelligence algorithms assessed each person’s gender and age group and analyzed their expressions for signs of “happiness” and “engagement.”
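In rough outline, a frame-analysis pipeline of the kind described above can be sketched in a few lines of Python. This is an illustration only: the face detector is a generic one that ships with the OpenCV library, and the emotion classifier is a hypothetical stand-in, not Zenus’s proprietary system.

```python
# Sketch of a camera-to-classifier pipeline: sample frames, find faces,
# and send each face crop to a model. The classifier below is a
# hypothetical placeholder, not Zenus's actual software.
import cv2  # pip install opencv-python

# OpenCV bundles pretrained Haar-cascade face detectors.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def classify_face(face_img):
    # Stand-in for a trained emotion classifier (hypothetical).
    # A real system would return scores such as {"happiness": 0.8, ...}.
    return {"happiness": None, "engagement": None}

cap = cv2.VideoCapture(0)      # one camera; the article describes 24
while cap.isOpened():
    ok, frame = cap.read()     # the article cites 5 to 10 frames per second
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
        scores = classify_face(frame[y:y + h, x:x + w])
        print(scores)
cap.release()
```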

About a year after the Liverpool event, Panos Moutafis, CEO of Austin, Tex.–based Zenus, the company behind the technology, was still excited about the results. “I haven’t seen lots of commercial systems getting this level of accuracy,” he said to me during a video call, showing me a photograph of the crowd, the faces outlined with boxes. Zenus engineers had trained the system to recognize emotions by having it examine a huge data set of facial expressions with labels describing relevant feelings. The company validated the program’s performance in various ways, including live tests in which people reported how they felt when an image was taken. The system, Moutafis said, “works indoors, it works with masks, with no lighting, it works outdoors when people wear hats and sunglasses.”
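The training procedure Moutafis describes is standard supervised learning: a model is shown many labeled examples and adjusts itself until it can predict the labels. A minimal sketch of that idea, assuming a folder of face images sorted into subfolders by emotion label (the directory layout, model choice and settings are illustrative, not Zenus’s):

```python
# Minimal supervised-learning sketch: learn to map face images to
# emotion labels from a labeled data set. All names are illustrative.
import torch
import torch.nn as nn
from torchvision import datasets, transforms, models

# Assumes faces/ holds one subfolder per label, e.g. faces/happy/, faces/sad/
tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
data = datasets.ImageFolder("faces/", transform=tfm)   # hypothetical path
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

model = models.resnet18(weights=None)                  # small generic CNN
model.fc = nn.Linear(model.fc.in_features, len(data.classes))
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for images, labels in loader:      # labels come from the folder names
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
```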
The Zenus setup is one example of a new technology—called emotion AI or affective computing—that combines cameras and other devices with artificial-intelligence programs to capture facial expressions, body language, vocal intonation, and other cues. The goal is to go beyond facial recognition and identification to reveal something previously invisible to technology: the inner feelings, motivations and attitudes of the people in the images. “Cameras have been dumb,” says A.C.L.U. senior policy analyst Jay Stanley, author of the 2019 report The Dawn of Robot Surveillance. “Now they’re getting smart. They are waking up. They are gaining the ability not just to dumbly record what we do but to make judgments about it.”
Emotion AI has become a popular market research tool—at another trade show, Zenus told Hilton Hotels that a puppies-and-ice-cream event the company staged was more engaging than the event’s open bar—but its reach extends into areas where the stakes are much higher. Systems that read cues of feeling, character and intent are being used or tested to detect threats at border checkpoints, evaluate job candidates, monitor classrooms for boredom or disruption, and recognize signs of aggressive driving. Major automakers are putting the technology into coming generations of vehicles, and Amazon, Microsoft, Google and other tech companies offer cloud-based emotion-AI services, often bundled with facial recognition. Dozens of start-ups are rolling out applications to help companies make hiring decisions. The practice has become so common in South Korea, for instance, that job coaches often make their clients practice going through AI interviews.
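These cloud services expose the capability through ordinary programming interfaces. As an illustration, Amazon’s Rekognition service returns per-face emotion labels with confidence scores for an uploaded image; the image file in this sketch is assumed, and account credentials must already be configured:

```python
# Sketch of querying a cloud emotion-AI service of the kind the article
# mentions, via Amazon Rekognition's detect_faces call.
import boto3

client = boto3.client("rekognition")
with open("attendee.jpg", "rb") as f:        # hypothetical sample image
    resp = client.detect_faces(Image={"Bytes": f.read()},
                               Attributes=["ALL"])

for face in resp["FaceDetails"]:
    # Emotions is a list like [{"Type": "HAPPY", "Confidence": 97.1}, ...]
    top = max(face["Emotions"], key=lambda e: e["Confidence"])
    print(top["Type"], round(top["Confidence"], 1))
```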
AI systems use various kinds of data to generate insights into emotion and behavior. In addition to facial expressions, vocal intonation, body language and gait, they can analyze the content of spoken or written speech for affect and attitude. Some applications use the data they collect to probe not for emotions but for related insights, such as what kind of personality a person has and whether he or she is paying attention or poses a potential threat.
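For written speech, off-the-shelf sentiment scorers illustrate the idea. A sketch using the open-source VADER analyzer in the NLTK library, one generic tool among many rather than any particular vendor’s product:

```python
# Sketch of scoring text for affect, as described above, with NLTK's
# bundled VADER sentiment analyzer.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon")     # one-time lexicon download
sia = SentimentIntensityAnalyzer()
print(sia.polarity_scores("I love this display, it's wonderful!"))
# Returns neg/neu/pos proportions plus a compound score in [-1, 1].
```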
But critics warn that emotion AI’s reach exceeds its grasp in potentially hazardous ways. AI algorithms can be trained on data sets with embedded racial, ethnic and gender biases, which in turn can prejudice their evaluations—against, for example, nonwhite job applicants. “There’s this idea that we can off-load some of our cognitive processes on these systems,” says Lauren Rhue, an information systems scientist at the University of Maryland, who has studied racial bias in emotion AI. “That we can say, ‘Oh, this person has a demeanor that’s threatening’ based on them. That’s where we’re getting into a dangerous area.”
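One way researchers such as Rhue surface this kind of bias is to compare a model’s average scores across demographic groups on comparable images. A toy sketch of such an audit; the data file and column names here are hypothetical:

```python
# Toy bias audit: compare a model's mean scores across groups.
# The CSV file and its columns are hypothetical stand-ins.
import pandas as pd

df = pd.read_csv("emotion_scores.csv")   # one row per scored face image
# Columns assumed: group (self-reported race/ethnicity), anger_score
print(df.groupby("group")["anger_score"].mean())
# Systematically higher "anger" scores for one group on comparable
# images would be evidence of the bias Rhue describes.
```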
The underlying science is also in dispute. Many emotion-AI apps trace their origins to research conducted half a century ago by psychologists Paul Ekman and Wallace Friesen, who theorized that a handful of facial expressions correspond to basic emotions (anger, disgust, fear, happiness, sadness and surprise; Ekman later added contempt to the list) and that these expressions form a universally understood emotional language. But these ideas are now hotly debated. Scientists have found evidence of significant