Financial Times Europe - 26.10.2019 - 27.10.2019




Matteo Renzi, Italy’s former prime minister and founder of the new Italia Viva party, sits in an opulent-looking office, face to the camera. An oil painting hangs to one side of him, on the other sits a Renaissance bust.
A technician ducks into shot to check his sound levels, and then Renzi is off. He starts by gurning at an off-screen audience member, greeting them in a hoarse stage whisper.
Then he turns on his fellow politicians. Giuseppe Conte, the current prime minister; Luigi Di Maio, his deputy; Carlo Calenda, a member of the European Parliament — all receive the same obscene arm gesture, punctuated with a little sneer.
This performance sent some Italians straight to Twitter to voice their outrage at the ex-prime minister’s diatribe. But it was not the real Renzi talking. On closer examination, the voice is different, as are the gesticulations. Even the face looks uncannily smooth.
That’s because the politician’s features have been algorithmically transplanted on to a comedian’s, as part of a skit for Striscia la notizia, a long-running Italian satire show. The video is the latest in a series of examples of how “deepfake” technology — or AI-generated videos designed to fool humans — has started to affect politics.
Only a few years ago, such “deepfakes” were a novelty, created by hobbyist coders. Today, they are increasingly commodified as yet another service available to those with even a little disposable cash.
While they may be increasingly cheap to pull off, their repercussions could be far-reaching. Fraudulent clips of business leaders could tank companies. False audio of central bankers could swing markets. Small businesses and individuals could face crippling reputational or financial risk.
And, as elections approach in the US, UK and elsewhere, deepfakes could raise the stakes once more in the electorate’s struggle to know the truth.
Around the world, start-ups, academics and lawmakers are rushing to create tools to mitigate these risks. But the technology itself is developing faster than anyone imagined.
Hany Farid, a professor at the University of California, Berkeley, has spent decades studying digital manipulation: “In January 2019, deepfakes were... buggy and flickery. Nine months later, I’ve never seen anything like how fast they’re going. This is the tip of the iceberg.”
On one thing, experts are clear. The mere risk of a deepfake undermines a most basic principle of humanity: can you believe your eyes?

Disinformation is as old as politics, and its practitioners have kept pace with technological changes. Where written fake news was the hallmark of the most recent election cycle in the US and UK, images and videos are increasingly the new focus of propaganda, says Vidya Narayanan, a researcher at the Oxford Internet Institute.
“[They] are powerful in shaping a situation. If you see an image, it is very immediate.” Software such as Photoshop was used to create a widely shared fake image of Emma González, a survivor of the Parkland shooting and a gun control activist, ripping up the US Constitution in 2018.
Altered videos are not exactly new. The most famous recent example is an edit of a slowed Nancy Pelosi speech from earlier this year. The video spread across conservative media as critics of the Speaker of the House of Representatives declared it evidence of her senility, alcoholism or a mental health problem.

Rudy Giuliani, US president Donald Trump’s personal lawyer, retweeted the video, before deleting it but defending his choice. Trump himself posted a different altered video of Pelosi, which is still online: it has nearly 30,000 retweets and more than 90,000 likes. The difference with a deepfake is that with an algorithm in charge, the results can be much more convincing.
The technology that powers deepfakes, known as Generative Adversarial Networks, was only invented in 2014. GANs are made up of two rival computer networks. A synthesiser creates content, which the detector or discriminator compares with images of the real thing.
“Let’s say the synthesiser places someone’s face on to someone [else]’s face,” says Farid. “The detector says there’s an artefact [a distortion in the image], do it again.” Through hundreds of thousands of cycles of trial and error, the two systems can create immensely lifelike videos.
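Farid’s loop — synthesise, spot the artefact, “do it again” — can be caricatured in a few lines of Python. This is a deliberately toy sketch: the “synthesiser” here is a single number and the “detector” merely measures the gap to a real sample, nothing like the deep neural networks real GANs use.

```python
import random

def toy_adversarial_loop(real_mean=5.0, steps=2000, lr=0.05):
    """Toy one-dimensional stand-in for a GAN's trial-and-error cycle.

    The synthesiser's entire 'model' is one parameter, theta. Each
    cycle, the detector compares the fake against a real sample and
    its feedback (the artefact) nudges theta a little closer.
    """
    random.seed(0)
    theta = 0.0                                # synthesiser starts clueless
    for _ in range(steps):
        real = random.gauss(real_mean, 0.1)    # an "image of the real thing"
        fake = theta                           # synthesiser's attempt
        artefact = fake - real                 # detector: how far off is it?
        theta -= lr * artefact                 # "do it again", slightly better
    return theta
```

After a couple of thousand cycles theta settles close to the real data’s mean; real GANs perform the same dance with millions of parameters and photographs instead of a single number.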
This has been the year that saw deepfakes move beyond the hands of those with powerful computers, graphics cards and at least some technical expertise. The DeepNude app was released in June and the ZAO app in August.
The former, now shut down, produced realistic female nudes from clothed photographs, leading to understandable outrage. The latter allowed users to plaster their faces over the protagonists of a selection of movies simply by uploading a few seconds of video to the free Chinese app. “These things are absolutely getting democratised, and it’s happening really rapidly,” Farid says.
He is not alone in expressing shock at the rate of development from academic concept to easily accessible reality. “We knew it was coming, but not nearly this fast,” says David Doermann, a professor at the University of Buffalo.
Like Farid, Doermann has been in the field of computer vision and image processing for more than two decades, and is an adviser for video-verification app Amber. “It’s hard to predict where [deepfakes will] go in the next five years, given they’ve only been around for five years.”
Making a deepfake from scratch is increasingly simple. Making a good deepfake, however, is another matter. The more powerful the computer and the graphics card, the more cycles a GAN can run through and the better the results. On top of that, many of the best-looking deepfakes have been professionally touched up afterwards.
Given these limitations, it is unsurprising that a market has started to
emerge. A Japanese start-up called Deepfakes Web is charging $2 an hour of processing time to create videos. On Fiverr, an online marketplace connecting freelancers with jobs, a user with the name Derpfakes offers to put customers’ faces into movie clips.
Face swapping may be the most common form of deepfake, but others are more ambitious. Ricky Wong, one of the co-founders of a start-up called Humen, explains that with three minutes of footage of movement and material from professionals, his company can make anyone “dance”. “We’re trying to bring delight and fun to people’s lives,” he says. “Not something like a Nazi salute, that would be horrible.”
Meanwhile, audio deepfakes are also on the rise. Modulate, a start-up based in Boston, is creating “audio skins”, real-time voice changers for use in video games. “There’s a lot of people who spend a lot of time and money building up their persona in games,” says Mike Pappas, the company’s co-founder and chief executive.
“Your normal voice breaks that illusion that you’ve spent so much time crafting.” Part way through our phone conversation, Pappas changes to a woman’s voice, and then to a co-worker’s: it comes across as a little stiff but still recognisably human.
Pappas acknowledges the risks of impersonation. In August, The Wall Street Journal reported on one of the first known cases of synthetic media becoming part of a classic identity fraud scheme: scammers are believed to have used commercially available voice-changing technology to pose as a chief executive in order to swindle funds.

As services such as Modulate grow, the number of legal cases is likely to go up. Pappas says Modulate screens requests to avoid impersonation. “We’ve landed on the fact that it’s important to be able to sleep at night,” he says. The company also places a digital watermark on its audio to reduce the risk of a voice skin being mistaken for the real thing.
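Modulate has not published how its watermark works. One classic technique that illustrates the idea is least-significant-bit embedding, in which a known bit pattern is hidden in the lowest bit of each audio sample — inaudible to a listener, but trivial for a detector that knows where to look. A minimal sketch, assuming 16-bit PCM samples represented as plain integers:

```python
def embed_watermark(samples, mark):
    """Hide a bit pattern in the least-significant bits of PCM samples."""
    out = list(samples)
    for i, bit in enumerate(mark):
        out[i] = (out[i] & ~1) | bit   # clear the low bit, set it to the mark bit
    return out

def read_watermark(samples, length):
    """Recover the first `length` hidden bits."""
    return [s & 1 for s in samples[:length]]
```

Flipping the lowest bit of a 16-bit sample changes its amplitude by at most one part in 32,768, well below audibility; production schemes are more robust, typically spreading the mark across frequencies so it survives compression and re-recording.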

As Henry Ajder walks through the nearly 600-year-old grounds of Queens’ College, Cambridge, he describes a daily routine that involves tracking the creation and spread of deepfake videos into the darkest corners of the internet.
Ajder’s job as head of communications and research analysis at start-up Deeptrace has led to him investigating everything from fake pornography to politics. In a report Deeptrace released last month, the scale of the problem was laid bare: the start-up found nearly 15,000 deepfakes online over the past seven months. Of these, 96 per cent were pornographic.
One of Ajder’s political cases looked at whether or not a deepfake may have contributed to an attempted coup in Gabon. Ali Bongo Ondimba, the president of the African nation, was taken ill in October last year and has been in Morocco since then, with little information released on his health. Then, in December, a surprise video of him was released, prompting speculation from political opponents.
“It just looked odd: the eyes didn’t
move properly, the head didn’t move in
a natural way — the immediate kind of
response was that this is a deepfake,”

Continued on page 19

AI-generated videos are fooling the public. What is society doing to stay one step ahead? By Siddharth Venkataramakrishnan


Below: in May, President Trump retweeted this video of Nancy Pelosi, Speaker of the House of Representatives, altered to supposedly provide evidence of health problems



Can you believe your eyes? How deepfakes are coming for politics


