Scientific American - USA (2020-12)

volunteers, who were non-Native, to recall the rather confusing story at increasing intervals, from minutes to years later. He found that as time passed, the rememberers tended to distort the tale’s culturally unfamiliar parts such that they were either lost to memory or transformed into more familiar things. We now know that our minds do this all the time: they adjust our understanding of new information so that it fits in with what we already know. One consequence of this so-called confirmation bias is that people often seek out, recall and understand information that best confirms what they already believe.
This tendency is extremely difficult to correct. Experiments consistently show that even when people encounter balanced information containing views from differing perspectives, they tend to find supporting evidence for what they already believe. And when people with divergent beliefs about emotionally charged issues such as climate change are shown the same information on these topics, they become even more committed to their original positions.
Making matters worse, search engines and social media platforms provide personalized recommendations based on the vast amounts of data they have about users’ past preferences. They prioritize information in our feeds that we are most likely to agree with, no matter how fringe, and shield us from information that might change our minds. This makes us easy targets for polarization. Nir Grinberg and his co-workers at Northeastern University recently showed that conservatives in the U.S. are more receptive to misinformation. But our own analysis of consumption of low-quality information on Twitter shows that the vulnerability applies to both sides of the political spectrum, and no one can fully avoid it.
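To make that mechanism concrete, here is a toy feed ranker in Python. It is a minimal sketch of ranking by predicted agreement, not any platform’s actual algorithm; the topics, stance scores and similarity rule are invented for illustration.

def rank_feed(items, user_profile):
    """Order items by agreement with the user's inferred leanings."""
    def agreement(item):
        # Dot product of the item's stances with the user's past
        # preferences: higher means the item confirms what the user
        # already likes, regardless of its accuracy.
        return sum(user_profile.get(topic, 0.0) * stance
                   for topic, stance in item["stances"].items())
    return sorted(items, key=agreement, reverse=True)

# A hypothetical user whose history leans strongly one way on one topic.
user_profile = {"climate": -0.9, "economy": 0.2}

items = [
    {"title": "Fringe post denying warming", "stances": {"climate": -1.0}},
    {"title": "Balanced climate explainer",  "stances": {"climate": 0.1}},
    {"title": "Mainstream science report",   "stances": {"climate": 1.0}},
]

for item in rank_feed(items, user_profile):
    print(item["title"])

Because the score rewards agreement rather than quality, the fringe post ranks first for this user, and the mainstream report ranks last: exactly the shielding effect described above.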
Even our ability to detect online manipulation is affected by our political bias, though not symmetrically: Republican users are more likely to mistake bots promoting conservative ideas for humans, whereas Democrats are more likely to mistake conservative human users for bots.

SOCIAL HERDING
In New York City in August 2019, people began running away from what sounded like gunshots. Others followed, some shouting, “Shooter!” Only later did they learn that the blasts came from a backfiring motorcycle. In such a situation, it may pay to run first and ask questions later. In the absence of clear signals, our brains use information about the crowd to infer appropriate actions, similar to the behavior of schooling fish and flocking birds.
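That run-first heuristic can be captured in a toy information cascade, sketched below. The signal accuracy, crowd rule and crowd size are invented assumptions for illustration, not a model of the actual event.

import random

# Toy information cascade: each passerby gets a weak private signal
# about danger but can also see how many people ahead are running.
random.seed(7)
SIGNAL_ACCURACY = 0.6    # chance a private signal matches reality
TRUTH_IS_DANGER = False  # ground truth: only a backfiring motorcycle

runners, seen = 0, 0
for person in range(100):
    correct = random.random() < SIGNAL_ACCURACY
    believes_danger = TRUTH_IS_DANGER if correct else not TRUTH_IS_DANGER
    if runners * 2 > seen:        # a majority ahead is running:
        runs = True               # follow the crowd, ignore the signal
    else:
        runs = believes_danger    # no clear crowd signal: trust yourself
    runners += runs
    seen += 1

print(f"{runners} of {seen} people ran from a harmless noise")

Depending on the first few signal draws, the entire crowd can lock into running from nothing: once a majority forms, individual evidence stops mattering.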
Such social conformity is pervasive. In a fascinating 2006 study involving 14,000 Web-based volunteers, Matthew Salganik, then at Columbia University, and his colleagues found that when people can see what music others are downloading, they end up downloading similar songs. Moreover, when people

Graphic by Filippo Menczer: two simulated networks, one with a low level of bot infiltration (left) and one with a high level (right). Each circle represents a social media account; yellow circles are bots (automated accounts), and pink circles are authentic accounts. Circle tint represents the quality of shared information, from low to high; circle size represents influence (the number of authentic followers); and lines and circle proximity represent connections between accounts. When bot infiltration is low, the overall quality of shared information is high; when bot infiltration is high, the overall quality of shared information is low.
Pollution by Bots

Bots, or automated accounts that impersonate human users, greatly reduce the quality of information in a social network. In one computer simulation, OSoMe researchers included bots (modeled as agents that tweet only memes of zero quality and retweet only one another) in the social network. They found that when less than 1 percent of human users follow bots, information quality is high (left). But when the percentage of bot infiltration exceeds 1 percent, poor-quality information propagates throughout the network (right). In real social networks, just a few early upvotes by bots can make a fake news item go viral.
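The dynamic in this caption can be reproduced qualitatively with a drastically simplified agent-based sketch, shown below. This is not the OSoMe model’s code; the network size, feed length, resharing probability and the way bots capture attention are all invented assumptions, and “infiltration” here stands in for the follow-based measure used in the study.

import random

random.seed(0)
N_HUMANS, FEED_LEN, STEPS = 200, 10, 20000

def run(infiltration):
    """Mean quality of memes humans share, for a given fraction of
    attention captured by bots (quality is on a 0-to-1 scale)."""
    feeds = [[] for _ in range(N_HUMANS)]
    shared = []
    for _ in range(STEPS):
        reader = random.randrange(N_HUMANS)
        if random.random() < infiltration:
            meme = 0.0                          # bots inject zero-quality memes
        elif feeds[reader] and random.random() < 0.75:
            meme = random.choice(feeds[reader])  # reshare something from the feed
        else:
            meme = random.random()               # humans create memes of random quality
        shared.append(meme)
        # The meme lands in a few followers' limited-length feeds.
        for follower in random.sample(range(N_HUMANS), 5):
            feeds[follower] = (feeds[follower] + [meme])[-FEED_LEN:]
    return sum(shared) / len(shared)

for infiltration in (0.001, 0.01, 0.1):
    print(f"bot infiltration {infiltration:6.1%}: "
          f"mean shared quality {run(infiltration):.2f}")

Even this toy version shows mean quality sliding as bots claim a larger share of attention; the real model adds limited attention and network structure, which produce the sharp threshold the researchers describe.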