Scientific American, September 2019
Bad actors know this: In 2018 media scholar Whitney Phillips published a report for the Data & Society Research Institute that explores how those attempting to push false and misleading narratives use techniques to encourage reporters to cover their narratives. Yet another recent report from the Institute for the Future found that only 15 percent of U.S. journalists had been trained in how to report on misinformation more responsibly. A central challenge now for reporters and fact checkers—and anyone with substantial reach, such as politicians and influencers—is how to untangle and debunk falsehoods such as the Pelosi video without giving the initial piece of content more oxygen.

MEMES: A MISINFORMATION POWERHOUSE
IN JANUARY 2017 the NPR radio show This American Life interviewed a handful of Trump supporters at one of his inaugural events, the DeploraBall. These people had been heavily involved in using social media to advocate for the president. Of Trump’s surprising ascendance, one of the interviewees explained: “We memed him into power.... We directed the culture.”
The word “meme” was first used by theorist Richard Dawkins in his 1976 book, The Selfish Gene, to describe “a unit of cultural transmission or a unit of imitation,” an idea, behavior or style that spreads quickly throughout a culture. During the past several decades the word has been appropriated to describe a type of online content that is usually visual and takes on a particular aesthetic design, combining colorful, striking images with block text. It often refers to other cultural and media events, sometimes explicitly but mostly implicitly.
This characteristic of implicit logic—a nod and wink to shared knowledge about an event or person—is what makes memes impactful. Enthymemes are rhetorical devices in which the argument is made through the absence of the premise or conclusion. Often key references (a recent news event, a statement by a political figure, an advertising campaign or a wider cultural trend) are not spelled out, forcing the viewer to connect the dots. This extra work required of the viewer is a persuasive technique because it pulls an individual into the feeling of being connected to others. If the meme is poking fun or invoking outrage at the expense of another group, those associations are reinforced even further.
The seemingly playful nature of these visual formats means that memes have not been acknowledged by much of the research and policy community as influential vehicles for disinformation, conspiracy or hate. Yet the most effective misinformation is that which will be shared, and memes tend to be much more shareable than text. The entire narrative is visible in your feed; there is no need to click on a link. A 2019 book by An Xiao Mina, Memes to Movements, outlines how memes are changing social protests and power dynamics, but this type of serious examination is relatively rare.
Indeed, of the Russian-created posts and ads on Facebook related to the 2016 election, many were memes. They focused on polarizing candidates such as Bernie Sanders, Hillary Clinton or Donald Trump and on polarizing policies such as gun rights and immigration. Russian efforts often targeted groups based on race or religion, such as Black Lives Matter or Evangelical Christians. When the Facebook archive of Russian-generated memes was released, some of the commentary at the time centered on the memes’ lack of sophistication and their impact. But research has shown that when people are fearful, oversimplified narratives, conspiratorial explanations and messages that demonize others become far more effective. These memes did just enough to drive people to click the share button.
Technology platforms such as Facebook, Instagram, Twitter and Pinterest play a significant role in encouraging this human behavior because they are designed to be performative in nature. Slowing down to check whether content is true before sharing it is far less compelling than reinforcing to your “audience” on these platforms that you love or hate a certain policy. The business model for so many of these platforms is attached to this identity performance because it encourages you to spend more time on their sites.
Researchers are now building monitoring technologies to track memes across different social platforms. But they can investigate only what they can access, and the data from visual posts on many social platforms are not made available to researchers. Additionally, techniques for studying text, such as natural-language processing, are far more advanced than techniques for studying images or videos. That means the research behind solutions being rolled out is disproportionately skewed toward text-based tweets, Web sites or articles published via URLs and fact-checking of claims by politicians in speeches.
Although plenty of blame has been placed on the technology companies—and for legitimate reasons—they are also products of the commercial context in which they operate. No algorithmic tweak, update to the platforms’ content-moderation guidelines or regulatory fine will alone improve our information ecosystem at the level required.