Adweek - 06.04.2020



As the coronavirus spreads
around the world, it leaves a trail of
misinformation in its wake. On social
media, where the algorithms reward
engaging content regardless of its
veracity, platforms like Facebook,
Twitter and YouTube carry the hefty
burden of cleaning up harmful rumors
from innocent users, bogus claims
from profiteers and disinformation
campaigns from bad actors.
But if you’ve logged onto a social
platform recently, you’ve probably
seen something different. Platforms
are giving banners, landing pages
and free ads to the World Health
Organization, the Centers for Disease
Control and Prevention and other
health agencies around the world. The
idea is that, while cleaning up false or
misleading content is vital, promoting
authoritative information is helpful for
users who don’t know where to turn.
The CDC did not respond to a
request for comment.
“As the outbreak is evolving and
knowledge is evolving, we are trying
to update people with new information
and we are trying to expand our
presence to more platforms,” said
WHO social media officer Aleksandra
Kuzmanovic. “The main strategy is to
reach as many people with reliable
and accurate information.” 
Kuzmanovic said the organization
has active partnerships with Facebook,
Google, LinkedIn, Pinterest, Snapchat,
Tencent and TikTok, but managing
all of this is no easy task. Since the
outbreak started a few months ago,
WHO has grown its social media team
from two people to six just to support
COVID-19 response efforts. 
In addition to urging social distancing,
hand-washing and other best practices,
Kuzmanovic said her team tracks
falsehoods spreading online in order to
dispel them, like a rumor that said garlic
could prevent COVID-19 infection.
This level of collaboration
between platforms and public health
authorities is “unprecedented,”
according to Cuihua (Cindy) Shen, an
associate professor of communication
at the University of California, Davis.
One of the biggest challenges in
combating misinformation around
COVID-19, she said, is that knowledge
of the virus is still evolving.
“That means true information
could become misinformation after a
few days or a few weeks,” Shen said.
“And vice versa: Misinformation could
become true information.”
On Jan. 14, for example, the WHO
tweeted, “Preliminary investigations
conducted by the Chinese authorities
have found no clear evidence of
human-to-human transmission of
the novel #coronavirus (2019-nCoV)
identified in #Wuhan, #China.”
Only 10 days later, China put 36
million citizens on lockdown. In the United
States, Sen. Rick Scott, R-Fla., called
for a congressional inquiry into WHO
for “willfully parrot[ing] propaganda”
from the Chinese government.
Misinformation like this may leave
a lasting mark, even if the authority
changes its tune over time.
“Even if a piece of information is
found to be false, that information
lingers,” Shen said. “That original
impression doesn’t erase itself just
because you’re exposed to fact-
checking information.”
But does promoting authoritative
information make up for the flurry of
misinformation on the platforms? That’s
one question that researchers at the
University of Washington’s Center for
an Informed Public—the center’s director,
Jevin West; computer science professor
Franziska Roesner; and Ph.D. student
Christine Geeng—are looking into.
“What this crisis has done is
demonstrated that all these big tech
companies can do something,” West
said. “There’s clearly more that they can
do, but they’re certainly doing more than
they’ve done in prior discussions about
misinformation and disinformation.”
Facebook and Instagram have
“directed more than 1 billion people
to resources from health authorities
including the WHO, more than 100
million of whom clicked through to
learn more,” company spokesperson
Andrea Vallone said. In a blog post,
Nick Clegg, vp of global affairs
and communications, said that the
coronavirus resource page, which is
only active in a few countries right
now, will be available globally soon.
The University of Washington
researchers conducted a survey of
Facebook users to see if banners,
fact-checked labels and promoted
posts changed user behavior and
perceptions of misinformation. 
“One thing we found is that the more
post-specific ‘This is false’ labels
are more effective in the moment,”
Roesner said. “People rank them as
more helpful and they change their
mind because of them.”
But what we don’t know enough
about yet, she said, are the peripheral
effects of these platform features.
“People say the banners aren’t
helpful and didn’t click on them, but
that doesn’t assess whether that’s
more subtly changing how they’re
consuming information,” she said.

CDC AND WHO SWEEP SOCIAL MEDIA

AMID THE CORONAVIRUS PANDEMIC, PLATFORMS ARE TRYING TO REDUCE MISINFORMATION. BY SCOTT NOVER


ILLUSTRATION: CHEFBOYRG; FILO/GETTY IMAGES

SCOTT NOVER IS A PLATFORMS
REPORTER AT ADWEEK, COVERING
SOCIAL MEDIA COMPANIES AND
THEIR INFLUENCE. @SCOTTNOVER

FACEBOOK BANNER DIRECTS USERS TO HARD FACTS
On Facebook, a fixed banner leads
users to a landing page full of updates
from Johns Hopkins University and the
CDC. “Learn how you can stay healthy
and prevent the spread of novel
coronavirus,” Facebook prompts users
when they search for coronavirus-
related keywords. The researchers’
preliminary, non-peer-reviewed
results found that only 13.3% of 313
participants who saw the Facebook
banner reported that they clicked on it.
By contrast, 79.5% said they’ve seen
misinformation about COVID-19 and
33.9% reported having believed false
information themselves.