Scientific American, July 2019

Zeynep Tufekci is an associate professor at the University
of North Carolina School of Information and Library Science
and a regular contributor to the New York Times. Her book,
Twitter and Tear Gas: The Power and Fragility of Networked Protest,
was published by Yale University Press in 2017.

THE INTERSECTION
WHERE SCIENCE AND SOCIETY MEET

Illustration by Cornelia Li

“Emotional AI” Sounds Appealing

But its consequences could be troubling

By Zeynep Tufekci

Perhaps you’re familiar with Data from Star Trek: The Next Generation, an android endowed with advanced artificial intelligence but no feelings—he’s incapable of feeling joy or sadness. Yet Data aspires to more. He wants to be a person! So his creator embarks on a multiseason quest to develop the “emotion chip” that would fulfill that dream.
As you watch the show, it’s hard not to wonder about the end point of this quest. What would Data do first? Comfort a grieving person? Share a fellow crewmate’s joy? Laugh at a joke? Make a joke?

Machine learning has already produced software that can process human emotions, reading microexpressions better than humans can and generally cataloguing what may be going on inside a person just from scanning his or her face.
And right out of the gate, advertisers and marketers have jumped on this technology. For example, Coca-Cola has hired a company called Affectiva, which markets emotion-recognition software, to fine-tune ads. As usual, money is driving this not-so-noble quest: research shows that ads that trigger strong emotional reactions are better at getting us to spend than ads using rational or informational approaches. Emotion recognition can also, in principle, be used for pricing and marketing in ways that just couldn’t be done before. As you stand before that vending machine, how thirsty do you look? Prices may change accordingly. Hungry? Hot dogs may get more expensive.
This technology will almost certainly be used along with facial-recognition algorithms. As you step into a store, cameras could capture your countenance, identify you and pull up your data. The salesperson might get discreet tips on how to get you to purchase that sweater—Appeal to your ego? Capitalize on your insecurities? Offer accessories and matching pieces?—while coupons customized to lure you start flashing on your phone. Do the databases know you have a job interview tomorrow? Okay, here’s a coupon for that blazer or tie. Are you flagged as someone who shops but doesn’t buy or has limited finances? You may be ignored or even tailed suspiciously.
One potential, and almost inevitable, use of emotion-recognition software will be to identify people who have “undesirable” behaviors. As usual, the first applications will likely be about security. At a recent Taylor Swift concert, for example, facial recognition was reportedly used to try to spot potential troublemakers. The software is already being deployed in U.S. airports, and it’s only a matter of time before it starts doing more than identifying known security risks or stalkers. Who’s too nervous? Who’s acting guilty?
In more authoritarian countries, this software may turn to identifying malcontents. In China, an app pushed by the Communist Party has more than 100 million registered users—the most downloaded app in Apple’s digital store in the nation. In a country already known for digital surveillance and a “social credit system” that rewards and punishes based on behavior the party favors or frowns on, it’s not surprising that so many people have downloaded an app that the New York Times describes as “devoted to promoting President Xi Jinping.” Soon people in China may not even be able to roll their eyes while they use the app: the phone’s camera could gauge their vivacity and happiness as they read Xi’s latest quotes, then deduct points for those who appear less than fully enthusiastic.
It’s not just China: the European Union is piloting a sort of “virtual agent” at its borders that will use what some have called an “AI lie detector.” Similar systems are being deployed by the U.S. government. How long before companies start measuring whether customer service agents are smiling enough? It may seem like a giant leap from selling soda to enforcing emotional compliance, and there can certainly be some positive uses for these technologies. But the people pushing them tend to accentuate the positive and downplay the potential downside. Remember Facebook’s feel-good early days?
If Data had ever been able to feel human emotions, he might have been surprised by how important greed and power are in human societies—and “emotional AI,” unless properly regulated, could be a key tool for social control. That should give us all unhappy faces.
