Scientific American Mind (USA), March/April 2020

Even those of us who are intelligent, analytical
and comfortable with ambiguity find statements
more believable simply because we have heard
them repeated.
This phenomenon, known as the illusory truth
effect, was first documented in the 1970s, but it
is more relevant than ever in the era of fake
news. One might immediately think of Donald
Trump, who is a prolific peddler of this type of
untruth. The Washington Post recently reported
that there are “more than 350 instances in which
[Trump] has repeated a variation of the same
claim at least three times.” In fact, Trump has
repeated some false claims more than 200
times—for example, his claim that his border wall
is being built. Of course, there’s nothing new
about this type of huckster’s grift. But online
environments supercharge it. They give repeated
false claims instant global distribution. More
important, they allow the person making false
claims to go on doing so while dodging the
pressure (and potential legal repercussions) that
accompany similar claims in public or in traditional
news sources.
Psychologists say that what makes repeated
claims seem truer is their “fluency.” Fluency
means the cognitive ease with which we process
a claim. Repeated claims are easier to represent
and comprehend. For that reason, they just feel
good. Our minds take this feeling as a cue that
the claim is true.
In a recent review of the research, Nadia M.
Brashier and Elizabeth J. Marsh identify two
additional ways disinformation hacks our truth
judgments. One that is closely related to fluency
and the good feelings it generates is memory.
The information and experiences stored in our
memory are powerful weapons in the fight for truth.
But, as with fluency, we take our memories as
cues, not as the raw materials for forming
well-considered judgments. We tend, in other words,
to go with “good enough.” We often accept claims
as true when they only partially fit with what we
know or remember.
Additionally, we can fall prey to the illusion
of explanatory depth, a tendency to overestimate
our knowledge and understanding of the issues
we care about. Research shows that when we do,
we are more likely to hold extreme beliefs and to
accept fake news as true.
Unfortunately, digital tools may be making our
memories even weaker and less effective for
judging truth. As Brashier and Marsh point out,
“search algorithms return content based on
keywords, not truth. If you search ‘flat Earth,’
for example, Google dutifully returns photoshopped
pictures of a 150-foot wall of ice that keeps us
from slipping off the planet.” For this reason,
relying on the Internet as truth-on-demand rather
than looking to our memories and acquired
knowledge can backfire in serious ways.
Brashier and Marsh also point out a more
basic mismatch between our brains and the
digital environment: We tend to make truth our
default judgment. This is especially true for visual
information. As with the other cues we use to
form truth judgments, this is a handy and useful
adaptation in other contexts. After all, humans
lived for millennia in an environment where we
could trust most of our senses most of the time.
Now, however, we find ourselves in a new
information ecosystem, one in which, according to
some sources, we will soon consume more false
media than true media. When it comes to coping
with that magnitude of misinformation, our brains
are simply not well equipped.
Is there anything we can do to keep our
guard up in the post-truth era? We know that
simply fact-checking claims is not enough. After all,
Welch’s “pedo ring” conspiracy theory had been
debunked long before he showed up armed at
Comet Ping Pong’s door.
There are, however, causes for hope. Once
we recognize our vulnerabilities, we can find
many other ways to design our information
consumption with them in mind. Along with Emmaline
Drew Eliseev, Brashier and Marsh found they
could wipe out the illusory truth effect by simply
prompting study participants to behave like
fact checkers.
One of the most interesting solutions may be
a collaborative one. Ziv Epstein, Gordon Pennycook
and David G. Rand have found that crowdsourced
judgments about the trustworthiness
of news sources can be surprisingly accurate.
They suggest allowing users of social media to
train algorithms to spot fake news as a scalable,
decentralized solution. After ignoring warnings
from friends and trying unsuccessfully to recruit
them, Edgar Welch went it alone. Perhaps if
we come together to protect against the
vulnerabilities we all share, no one else will make the
same mistake.
