
Image credit: “Diagnosing Gender Bias in Image Recognition Systems,” by Carsten Schwemmer et al., in Socius: Sociological Research for a Dynamic World, Vol. 6; published online November 11, 2020, https://doi.org/10.1177/2378023120967171 (headshots with labels); Wikipedia (headshots)

“…it right, maybe not all the time, but more than random, right.”
In general, algorithms that scan and aggregate the reactions of many people—such as those Zenus uses to read crowds—will be more accurate, Barrett says, because “better than random” becomes statistically meaningful with a large group. But assessing individuals is more treacherous, because anything short of 100 percent accuracy ends up discriminating against certain people.
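Barrett’s point about aggregation is, at bottom, statistical, and a toy simulation makes it concrete. The sketch below is not drawn from any vendor’s system; the 65 percent per-person accuracy, the 30 percent true rate of “positive” expressions and the symmetric error model are all arbitrary assumptions chosen for illustration. It shows how a detector only modestly better than chance can still pin down a crowd-level rate once its known error rate is inverted over many readings.

import random

def estimate_crowd_rate(true_rate, accuracy, n_people, trials=1000):
    """Simulate repeated crowd scans; return the mean and spread of the
    bias-corrected estimate of the crowd-level 'positive' rate."""
    estimates = []
    for _ in range(trials):
        observed = 0
        for _ in range(n_people):
            truth = random.random() < true_rate
            # The per-person detector is right only with probability `accuracy`.
            reported = truth if random.random() < accuracy else (not truth)
            observed += reported
        observed_rate = observed / n_people
        # Invert the known error rate to recover the underlying rate:
        # observed_rate = true*acc + (1 - true)*(1 - acc)
        corrected = (observed_rate - (1 - accuracy)) / (2 * accuracy - 1)
        estimates.append(corrected)
    mean = sum(estimates) / trials
    sd = (sum((e - mean) ** 2 for e in estimates) / trials) ** 0.5
    return mean, sd

for n in (10, 100, 10_000):
    mean, sd = estimate_crowd_rate(true_rate=0.30, accuracy=0.65, n_people=n)
    print(f"n={n:>6}: estimated rate {mean:.3f} +/- {sd:.3f}")

With the same 65 percent detector, the spread of the corrected estimate shrinks roughly as one over the square root of the crowd size: scanning 10 people tells you almost nothing, whereas scanning 10,000 nails the rate to within a couple of percentage points. That is why a crowd reading can be meaningful while any single person’s reading is not.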
Many computer vision specialists are now embracing a more agnostic view of facial expressions. (And more companies have begun stating they do not directly map emotions or internal states.) “As the field has developed, there’s increasing understanding that many expressions have nothing to do with emotion,” says Jonathan Gratch, a computer science professor at the University of Southern California, who specializes in affective computing. “They’re kind of tools we use to influence each other, or they’re almost like words in a conversation, and so there’s meaning in those words. But it is not direct access to what I’m feeling in the moment.”

Yet as attempts to map and monetize emotional expressions, personality traits and behaviors grow, they are expanding the parts of our lives that can fall under surveillance. After 20 years of tech companies mining personal data from online behavior, a new, more intimate domain—faces and bodies and the signals they send—is poised for similar treatment. “If you’re a Coca-Cola, and you’re driving a campaign, and your principal methodology for messaging is the Internet, you know everything about what audience you reached,” says Jay Hutton, CEO of Vancouver-based company VSBLTY, which markets smart cameras and software that scan crowds, analyzing demographics and reactions to products for retailers. “But what if we could take computer vision and turn bricks and mortar into that same level of analytics?”
In December 2020 VSBLTY announced a partnership with Mexican brewer Grupo Modelo to create in-store networks of cameras to capture data in the beverage company’s 50,000 Modelorama convenience stores and neighborhood bodegas in Mexico and other Latin American countries by 2027. Demand will exist wherever there are screens and advertising, Hutton says. The technology “will be used in transit hubs, or in an airport, or a stadium,” he says. “Advertisers are paying millions of dollars to be a sponsor, and their ads appear on screens throughout the stadium, [and] they are looking for validation of that spend.”
This trend raises a basic legal and social question: Do the data from your face and body belong to you? In most places around the world, the answer is no—as long as your personal identity is kept separate from that data. “If you would like to know, and somebody’s in public, there seems to be no limit in scanning them for their emotions,” says Jennifer Bard, a professor at the University of Cincinnati College of Law, who has studied the issue.
Most emotion-AI companies that capture data in public say the information is anonymized, and thus its collection should not provoke concern. VSBLTY does not store facial images or other data that can be linked to identities, Hutton says. Zenus’s Moutafis notes that his company’s app does not upload the actual facial images that its cameras capture—only the relevant metadata on mood and position—and that it puts up signs and notices on meeting screens that the monitoring is occurring. “Explicit consent is not needed,” he says. “We always tell people deploying it that is a very good practice; when you have a surveillance sensitivity, you have to put up a sign that these areas are being monitored.” Typically, Moutafis says, people do not mind and forget about the cameras. But the diversity of applications means there are no common standards. It is also far from clear whether people and politicians will embrace such routine surveillance once it becomes a political and policy issue.
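The “metadata, not images” pattern Moutafis describes can be sketched in a few lines. Everything below is hypothetical and invented for illustration—the record fields and the stub detector are not Zenus’s actual code. The point is only architectural: analysis runs on the device, derived fields are queued for upload, and the raw frame never leaves local memory.

from dataclasses import dataclass

@dataclass
class FaceRecord:
    x: float    # face position in the frame, normalized to 0..1
    y: float
    mood: str   # coarse label such as "positive" or "neutral"

def analyze_frame_locally(frame_pixels: bytes) -> list:
    # Stand-in for an on-device vision model; a real system would run
    # face detection and mood classification here, on local hardware.
    return [FaceRecord(x=0.42, y=0.37, mood="positive")]

def handle_frame(frame_pixels: bytes, upload_queue: list) -> None:
    records = analyze_frame_locally(frame_pixels)
    upload_queue.extend(records)  # only the derived metadata is queued
    del frame_pixels              # the raw image is discarded on-device

queue = []
handle_frame(b"\x00" * 1024, queue)  # stand-in bytes for one camera frame
print(queue)  # [FaceRecord(x=0.42, y=0.37, mood='positive')]

Whether such a design counts as anonymous in practice depends on what the metadata can be joined with later, which is exactly the gap the lack of common standards leaves open.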
Ekman, who earlier worked with the company Emotient and with Apple on emotion AI, now warns it poses a threat to privacy and says companies should be legally obligated to obtain consent from each person they scan. “Unfortunately, it is a technology that can be used without people’s knowledge, and it’s being used on them, and it’s not being used on them to make them happier,” he says. “It’s being used on them to get them to buy products they might not otherwise buy. And that’s probably the most benign of the nonbenign uses of it.”
Emotion AI has entered personal spaces, too, where the potential hoard of behavioral data is even richer. Amazon’s Alexa analyzes users’ vocal intonation for signs of frustration to improve its algorithms, according to a spokesperson. By 2023 some automakers will be debuting AI-enabled in-cabin systems that will generate huge amounts of data on driver and passenger behavior. Automakers will want those data, also likely anonymized, for purposes such as refining system responses and in-car design and for measuring aggregated behavior such as driver performance. (Tesla already collects data from multiple sources in its vehicles.) Customers would likely have the option of activating various levels of these systems, according to Modar Alaoui, CEO of emotion-AI company Eyeris, so if occupants do not use certain functions, data would not be collected on those. The in-cabin systems designed by Affectiva (recently acquired by Swedish firm Smart Eye) do not record video but would make metadata available, says chief marketing officer Gabi Zijderveld.

Aleix Martinez, a computer vision scientist at Ohio State and Amazon and a co-author with Barrett of the 2019 paper criticizing the face-emotion connection, has a photo he is fond of showing people. It is of a man’s face that appears to be twisted in a mixture of anger and fear. Then he shows the full image: it is a soccer player exultant after scoring a goal. Facial expressions, gestures, and other signals are not only a product of the body and brain, he notes, but of context, of what is happening around a person. So far that has proved the biggest challenge for emotion AI: interpreting ambiguous context. “Unless I know what soccer is, I’m never going to be able to understand what happened there,” Martinez says. “So that knowledge is fundamental, and we don’t have any AI system right now that can do a good job at that at all.”
The technology becomes more effective, Martinez says, if the task is narrow, the surroundings are simple, and the biometric information collected is diverse—voice, gestures, pulse, blood flow under the skin, and so on. Coming generations of emotion AI may combine exactly this kind of information. But that, in turn, will only create more powerful and intrusive technologies that societies may not be prepared for.

FROM OUR ARCHIVES
What’s in a Face? Susana Martinez-Conde and Stephen L. Macknik; Scientific American Mind, January/February 2012.
Facial Recognition—My Favorite Alternative to Password Login. David Pogue; August 2016.
Instruments of Bias. Claudia Wallis; June 2021.
scientificamerican.com/magazine/sa