New Scientist - USA (2019-11-16)




Pundits in the US are arguing over a technology that is used almost exclusively for elections and pornography. I am referring to deepfakes, videos manipulated with simple apps to swap out faces, distort words and make it look like politicians are starring in hot XXX movies. The fate of deepfakes could change the course of democracy. And that feels very on-brand for the US right now.
Technologists first warned about the power of machine learning to create convincing doctored videos back in 2017. Some deepfakes are so well done that it is impossible to distinguish them from legitimate footage. What if political operatives created a video making it appear that their opponents were doing something illegal or worse? After all, President Donald Trump has already assisted an attempt to undermine the credibility of Californian Democrat and House of Representatives speaker Nancy Pelosi by tweeting a doctored video of a speech in which her voice was distorted to make it seem like she was drunkenly slurring her words. Fearing more scenarios like this, California passed a law last month that will forbid the use of deepfakes in the 2020 presidential election.
Politicians are voicing legitimate concerns, but they are worried about the wrong targets. Dutch cybersecurity firm Deeptrace released a report last month showing that nearly 96 per cent of deepfakes are revenge porn, videos where a victim’s face has been swapped onto a porn star’s body. We have yet to see the expected avalanche of deepfake political propaganda. In fact, bracing for the onslaught of such deepfakes has distracted us from the real fake menace: targeted political ads on social media platforms.
Concerns over these ads reached a fever pitch last month, when US presidential candidate Elizabeth Warren decided to take Facebook founder Mark Zuckerberg to task. Her campaign bought Facebook ads that said: “Breaking news: Mark Zuckerberg and Facebook just endorsed Donald Trump for re-election.” She had created a fake deepfake – albeit without video – to make the point that Facebook would run literally any lie from a politician as an ad.

Responding to Warren’s stunt at a Congressional hearing, Zuckerberg said Facebook has no plans to fact-check political adverts because that would be “censorship”. He added: “I just think that in a democracy people should be able to see for themselves what politicians are saying.”

That sounds pretty reasonable until you get to the bit about people seeing “for themselves”. Because that is exactly what Facebook won’t allow. Most people will never see the vast majority of political ads on the social media platform.
As we learned from the Cambridge Analytica scandal, Facebook gathers a tremendous amount of personal data on its users. Its “micro-target” marketing tools offer political campaigns a chance to deliver ads to audiences based on personality traits and biases. Micro-targeting allows political lies on Facebook to reach only the people most likely to fall for them.
In a healthy democracy, it would be perfectly fine for a politician to spout as many lies as they wanted. The whole citizenry could mull their words over, and voters could alert each other to falsehoods or distortions. We could have a national debate about our representatives’ credibility.

But Facebook has destroyed the public sphere where such a debate might take place. Instead, a politician can craft one set of lies for urban voters and a totally different set for rural ones. Or they can spew anti-immigrant propaganda to white Facebook without fear that watchdog groups will see it.
Put simply, the problem isn’t that politicians can lie on Facebook. It is that Facebook’s micro-targeting prevents liars from getting caught. That is why former Facebook chief security officer Alex Stamos has been calling for the firm to stop allowing micro-targeted ads.

He is joined by Siva Vaidhyanathan, a media studies professor at the University of Virginia and author of the book Anti-Social Media. He argues that Facebook’s business model, which is entirely based on mining data and selling targeted ads, is “undermining our democracies”.
Deepfakes are undeniably a menace. But it is unrealistic to imagine we can legislate away the basic human urge to lie. What we can do is to make it harder for those lies to fester unchecked, fostering extremism and conspiracy theories. Political messages should be addressed to the entire electorate, otherwise we risk fragmenting our democracies into vulnerable micro-targets. ❚

This column appears
monthly. Up next week:
James Wong


The real fake menace
Doctored videos may change democracy one day, but right now we have much more to fear from Facebook, writes Annalee Newitz

This changes everything


Annalee’s week

What I’m reading
Biologist Thomas Seeley’s fascinating book Honeybee Democracy, because apparently insects do it better than humans.

What I’m watching
The 2008 TV series Terminator: The Sarah Connor Chronicles, the only worthy sequel to Terminator 2.

What I’m working on
An article about what the future will be like after social media dies.


Annalee Newitz is a science journalist and author. Their novel Autonomous won the Lambda Literary Award and they are the co-host of the Hugo-nominated podcast Our Opinions Are Correct. You can follow them @annaleen and their website is techsploitation.com

Views Columnist
