Financial Times Europe - 26.10.2019 - 27.10.2019


Spectrum


says Ajder. A week after the video was
released, junior officers attempted a
coup d’état, which was quickly crushed.
In fact, Deeptrace did not find evidence
of manipulation but, for Ajder, that may
be irrelevant. What was important was
the uncertainty created. “Even before
these videos become very good or very
widespread, we are already seeing the
spectre of deepfakes haunting people,”
he says. “It really drives how powerful
the mere doubt is... about any videos
we already want to be fake.”
Kanishk Karan, a researcher at the
Digital Forensic Research Lab, part of
the US think-tank Atlantic Council,
points to another potential deepfake,
this time in Malaysia: a video alleging to
show economic affairs minister Azmin
Ali in a tryst with another minister’s
male aide.
Given Malaysia’s colonial-era laws
and persistent discrimination against
LGBT communities, the footage,
released in June, naturally provoked
controversy. “A lot of people were saying
it’s a deepfake version of him,” says
Karan. “On the other side, the opposi-
tion is saying that it’s not a deepfake, it’s
a real confession.” To date, the scandal
has not toppled Ali.
Deepfakes may be particularly
destructive in countries such as India or
Brazil. In both, there is heavy use of
WhatsApp, a platform that lends itself
to videos and images and whose closed
nature also comes with a sense of
security and trust.
Both countries have large populations
without basic literacy, Narayanan of the
Oxford Internet Institute points out,
making it difficult to generate media
literacy. As is often the case with
disinformation, vulnerable populations
are most at risk.
“The internet represents the peak of
the fourth industrial revolution. These
are communities that haven’t reaped
the benefits of the first one — they do
not have the know-how to begin to
understand that a computer can create
this,” she says.

Those creating the technology to fight
deepfakes on the ground are divided
into two broad camps. The first is detec-
tion, identifying fake videos and images
as they emerge. Deeptrace is one of the
companies in that space, explains chief
executive Giorgio Patrini, as he calls
from the company’s Amsterdam
headquarters to demonstrate its system.
The video on my screen looks rather
like an earlier version of Windows
Movie Maker but in navy corporate col-
ours. On the right-hand side are four
videos. Among them is the now-famous
deepfake of Mark Zuckerberg produced
for a political art installation and
another of Rowan Atkinson’s face
superimposed on Donald Trump’s body.
The video Patrini drags over to the
left-hand side is unfamiliar, however,
and comes from a Taiwanese channel on
YouTube. It features a young woman
smiling and talking to the camera. Over
the connection, the video comes across
as slightly choppy, but hardly enough to
suggest something is out of the ordinary.
When he hits play, a red box playing
over her features, flashing percentages,
reveals that it is a fake: a fan wearing a
K-Pop singer’s face. “We’ve seen a cou-
ple of things that are coming out on
Asian markets and are being sold for
pennies on these digital marketplaces,”
says Ajder. This video is harmless, but
Patrini says that (female) K-Pop singers
have become major targets of fake porn.
Patrini explains that Deeptrace’s tech-
nology is trained on the thousands of
deepfakes that the company has pieced
together from across the internet. “We
have, to the best of our knowledge, the
largest network of fake videos out
there,” he says.
The job is a continuous one, though.
“Things a year ago don’t compare,” he
says. “We see thousands of people
contributing to small tweaks to the
technology on GitHub, doing it as a
hobby.”
Farid, the Berkeley professor, is also
working on detection, focusing prima-
rily on public figures, including world
leaders. His system analyses hours of
video of their conversations, including
interviews, speeches and rallies. From
there, it focuses on the specific idiosyn-
crasies of their speech patterns and
expressions.
“When Obama would deliver bad
news, he would frown, his brow would
furrow and he would slightly tilt his
head downwards,” he says. “If he was
being funny he would smile and look up
to the left... Everyone has a different
cadence to how their expressions
change.”
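Farid’s approach can be sketched in outline: build a statistical profile of a speaker’s expressions from trusted footage, then flag clips that deviate from it. The sketch below is a simplified illustration, not Farid’s actual system — the per-frame feature values (eyebrow raise, head tilt and so on) and the threshold are hypothetical stand-ins for whatever a real face-tracking pipeline would produce.

```python
from statistics import mean, stdev

def build_profile(reference_clips: list[list[float]]) -> tuple[float, float]:
    """Profile a speaker from trusted footage: mean and spread of one
    hypothetical expression feature, pooled across reference clips."""
    values = [v for clip in reference_clips for v in clip]
    return mean(values), stdev(values)

def is_suspicious(clip: list[float], profile: tuple[float, float],
                  z_threshold: float = 3.0) -> bool:
    """Flag a clip whose average feature value sits far outside the
    speaker's profile -- the kind of mismatch a GAN-made face can leave."""
    mu, sigma = profile
    return abs(mean(clip) - mu) > z_threshold * sigma
```

A clip whose average “eyebrow” value sits many standard deviations from the speaker’s learned baseline would be flagged; real systems track many such features jointly rather than one at a time.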
For now, Farid thinks he is one step
ahead of the deepfake producers. “I’m
analysing eyebrows and head move-
ments,” he says. “The GANs don’t know
that there’s a face there.” In the long
term, however, he is pessimistic. “It’s an
arms race and, at the end of the day, we
know we’re going to lose — but we’re
going to take it out of the hands of the
amateur and move it into the hands of
fewer people.”
Dr Wael Abd-Almageed, a senior
scientist at the University of Southern
California, represents yet another attempt
at detection. His work strings together
several video frames to verify whether
an image is a deepfake. Nevertheless, he
is quick to acknowledge that his
research may unintentionally feed into
improvingfuturedeepfakes.
“My anticipation is the people who
create them will see our paper and try
and improve their methods to fool our
detector,” he says. “If you think deep-
fakes are a problem now, they will be
much harder in the next couple of years.”

The second method of combating
deepfakes focuses on improving trust in
videos. Truepic, a San Diego-based
start-up, has been trying to fight manip-
ulated videos and photos for four years,
with experts such as Farid on its
advisory board.
Jeffrey McGregor, Truepic’s chief
executive, says the company launched
in response to a spate of manipulated
pictures online. “Deepfakes will forever
be generated... What Truepic is aim-
ing to do instead of detecting them, is
establishing truth.”
Truepic has produced a camera app
for everyday usage. “When you tap on
that shutter button, we’re capturing all
of the geospatial data — GPS sensors,
barometric pressures, the heading of the
device — and securely transmitting that
to Truepic’s verification server.”
There, the company runs tests to
check whether the image has been
manipulated. If it has not, Truepic
uploads a verified version to its website,
which can be shared with other parties.
McGregor says that Truepic has
already found business uses with
insurers and lenders. The company is also
working with NGOs who have a particular
need for verified images. “One example
is the Syrian American Medical Society
— they’ve used Truepic to document
some of the events that are happening in
Syria,” he says.
Amber, a San Francisco-based
start-up, produces detection software as
well as Amber Authenticate, a camera
app that generates “hashes” — represen-
tations of the data — that are uploaded
to a public blockchain as users shoot a
video. If the veracity needs to be
checked — for example, in a courtroom
— any differences between hashes can
show whether it has been tampered
with. “Video or audio being used as evi-
dence should not be operating in proba-
bilities,” says Shamir Allibhai, Amber’s
chief executive.
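The hash-comparison idea behind Amber Authenticate can be sketched as follows. This is a simplified illustration, not Amber’s actual implementation: SHA-256 stands in for whatever hash the app uses, fixed-size byte chunks stand in for video segments, and the published hash list stands in for records on a public blockchain.

```python
import hashlib

def chunk_hashes(data: bytes, chunk_size: int = 1024) -> list[str]:
    """Hash fixed-size chunks of a recording -- a stand-in for the
    per-segment hashes published at capture time."""
    return [
        hashlib.sha256(data[i:i + chunk_size]).hexdigest()
        for i in range(0, len(data), chunk_size)
    ]

def find_tampered_chunks(published: list[str], candidate: bytes,
                         chunk_size: int = 1024) -> list[int]:
    """Compare a candidate recording against the published hashes.
    Any mismatching index localises where an edit was made."""
    current = chunk_hashes(candidate, chunk_size)
    return [i for i, (a, b) in enumerate(zip(published, current)) if a != b]
```

Because a cryptographic hash changes completely if even one byte of the input changes, a single altered frame makes its segment’s hash diverge from the published record — which is why, as Allibhai says, the check deals in certainties rather than probabilities.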

Yet while entrepreneurs and academ-
ics can produce software to fight deep-
fakes, social-media giants must also tus-
sle with them. YouTube told the FT that
it was aware of the issue and was work-
ing on it. The video platform did remove
the altered video of Pelosi.
It remains unclear, however, what
policies it might invoke that could stop
users taking parody videos and repost-
ing them as if they were real, as with the
Renzi deepfake.
Of the Big Tech companies, it is Face-
book that has started to take the lead in
looking for technical solutions. In Sep-
tember, it announced the launch of the

Deepfake Detection Challenge along-
side Microsoft, academics in the US and
UK, and an industry consortium called
the Partnership on AI.
“Better late than never,” says Farid,
who is among the scientists involved.
“Yes, this is good — YouTube and Twit-
ter should be doing this too — but there’s
a second part, the policy issue.” He
points to the altered video of Nancy
Pelosi uploaded to Facebook as a prime
example of this dimension. “Facebook
knew it was fake within seconds. They
also said, ‘We are not the arbiters of
truth.’” The video stayed up.
In its refusal to act, Facebook strived
to stay within the limits of Section 230 of
the Communications Decency Act —
legislation from 1996. It counts websites as
platforms rather than publishers, to
promote free speech, but has come
under increasing criticism for seeming
to enable companies to avoid liability
for the content they host. The result
is an often piecemeal approach to
content issues.
Electoral systems are also lagging. A
spokesperson for the UK’s Electoral
Commission said that deepfakes are just
one challenge posed by the rise of digital
campaigning. While printed material is
required by law to have imprints show-
ing authorship, this does not apply to
electronic content — a potentially
dangerous loophole.
One way to deal with this would be
through enacting clear regulation.
Mutale Nkonde, a fellow at the Berkman
Klein Center at Harvard University, was
among those involved in helping draft
the Defending Each and Every Person
from False Appearances by Keeping
Exploitation Subject to Accountability
Act (the DEEPFAKES Accountability Act).
“It became incredibly important to
enter a piece of legislation,” she says. “As
we move towards 2020, we may be sub-
ject to supposed video evidence and we
need a way of identifying what may look
real [but is not].” She says that there are
fears that both China and Iran could turn
to deepfakes as a tool to attack the US.
Yet these dangers have to be dealt
with in the framework of Section 230.
The compromise for Nkonde and her
colleagues was to treat deepfakes as a
consumer-rights issue, making it about
fraudulent representation. The DEEP-
FAKES Accountability Act, referred to
the subcommittee on Crime, Terrorism
and Homeland Security in June, would
make deepfakes for purposes such as
fake porn, disinformation or election
interferenceillegal.
Those synthetic videos produced for
purposes such as parody or education
would need to be watermarked. But
Nkonde says that even as someone who
helped draw up the bill, she now ques-
tions its feasibility.
“The issue with watermarking... is
the technical architecture completely
changes the video,” she says. “It’s a com-
pletely new piece of video.” Trying to
prove something is a fake without
reference to the “real” footage would be
extremelyhard.
She also worries that watermarking
would lead to false positives, or
that canny developers could try to have
real videos flagged as deepfakes. “We
may end up having to actually favour
some type of ban or moratorium until
we get further research in all the differ-
ent ways [videos] could be falsified,” she
suggests. “We’re falling foul to how fast
tech is moving.”
While the rate of progress is astound-
ing, experts remain unconvinced about
a deepfake apocalypse in the political
sphere. It is the plausible deniability the
technology offers that remains its great-
est power. Doermann at the University
of Buffalo says that in the US at least,
where public awareness of the technol-
ogy is growing, an extremely high-qual-
ity deepfake would be needed to change
the course of electoral history in 2020.
“It would take a massive amount of
computing power. It’s not going to be a
rogue person, it would be a nation state.”
Ajder is also willing to admit that as
with other advances in AI, scepticism
around the technology is fair. But he is
certain, nevertheless, that deepfakes, as
a form of disinformation, are dangerous.
“They appeal to a different kind of
truth,” he says. “The concept of truth
has never been as solid as we like to
think.” Voters who see a video of a poli-
tician behaving in a way they expect
them to might understand it is a fake,
but believe it represents an underlying
reality to their character.
“We no longer have the luxury of
[deciding] when we can suspend our
reality,” Ajder concludes.
That is evident in the response to the
altered video of Pelosi. Many comments
on The Washington Post’s explainer on
YouTube reflect the view that the specif-
ics of the video do not matter. “She still
sounds drunk and really messed up at
normal speed... can’t hear no big dif-
ference,” says one user. That comment
received 564 likes.


Siddharth Venkataramakrishnan is an FT
leader writer

The warning klaxon has
been sounding for years:
artificial intelligence (AI)
will transform the job mar-
ket. The conversation in
the tech world has now turned from try-
ing to convince people of this future to
analysing how and when machines will
enter the workforce independently, and
what that will mean for humans.
A lot of research has been devoted to
pinpointing jobs at risk versus those
considered “safe”, and to defining the
skills required in this new world.
We know that algorithms can aug-
ment human decision-making by find-
ing unexpected correlations in large

data sets. This will allow machines to
displace some jobs that require rote
tasks, assist on other tasks and, ulti-
mately, create some completely new
jobs, such as AI data labellers or algo-
rithm designers.
Fewer studies have been done on how
this might play out in gender terms, but
many futurists are optimistic about
women’s chances of surviving in this
new job market. Why? Some of the
most valued skills in an AI-assisted
workforce will be in areas where robots
fall short: social and emotional intelli-
gence (EI, as opposed to AI).
A report from consultants Capgemini
found that 83 per cent of organisations

surveyed believe an EI workforce will be
a requisite for success in coming years.
If the labour market values empathy,
multitasking, collaboration and com-
passion (attributes historically associ-
ated with women), then women, so the
argument goes, may stand a better
chance of being recruited and retained.
But there is a twist. To flourish in an
AI-driven workforce, a significant
degree of retraining and adaptation will
be required.
The real risks for the female work-
force lie not in the types of jobs they are
suited for, but in whether they can
make smooth job transitions. Will
women be able to reskill and network
their way into new and adjacent occu-
pations as labour demands shift?
A June report from the McKinsey Glo-
bal Institute examined this question in
10 countries, including six developed
economies and four emerging econo-
mies. They found that women and men
face a similar scale of potential job
losses and gains, with an average of 20
per cent of female workers, or 107m
women, potentially being displaced by

automation by 2030, compared
with 21 per cent for men.
But the report also suggested
that up to 160m women worldwide
(and 274m men) may need to tran-
sition across occupations to remain
in work. “If women take advantage
of transition opportunities, they
could maintain their current share
of employment; if they cannot,
gender inequality in work could
worsen,” the authors wrote.
The adaptation challenges for
women are multi-faceted — and
not necessarily related to whether
they are suited to more technical
jobs such as algorithm-writing and
data-modelling.
In emerging economies such as
India, for example, new occupations
could require a secondary education,
posing a challenge where female educ-
ation rates continue to lag behind
men’s. Another result of more auto-
mation will be a demand for labour
mobility and flexibility, with workers
required to travel between occupations,
employers, sectors or even countries.

However, women are often less
mobile than men “because they spend
so much more time than men on unpaid
care work — more than 1.1 trillion hours
a year, compared with under 400bn
hours for men,” the report said.
In addition, women often find them-
selves excluded from leadership net-
works that allow men to enhance their

skills, find mentors and identify
job opportunities. This is some-
times the result of caring responsi-
bilities: collecting a child from
nursery or visiting an elderly par-
ent may be prioritised over post-
work drinks in the pub.
But it also reflects an uncon-
scious bias that still causes many
people to associate leadership
traits with men.
So while we can celebrate the
notion that the age of machines
may put a value on so-called “soft
skills” such as empathy and com-
munication, we will need to con-
tinue to focus on the banal practi-
calities of work in 2030 if women
are to be included.
Ultimately, it comes back to the same
challenges we have today: will women
have access to the same networks, edu-
cation and opportunities that their male
counterparts do? If not, an AI world
won’t look so different after all.

Madhumita Murgia is the FT’s
European technology correspondent

When networks rule

Tech World | Could women be better placed to
thrive in a workforce transformed by AI? Or, asks
Madhumita Murgia, will the old biases still apply?



‘There are communities that haven’t
(even) reaped the benefits of the first
industrial revolution. They do not have
the know-how to begin to understand
that a computer can create this’

Vidya Narayanan, researcher,
Oxford Internet Institute

From top: a deepfake video of former
Italian prime minister Matteo Renzi,
broadcast on a satirical TV show last
month; created as a political art
installation, this deepfake of Facebook
CEO Mark Zuckerberg went viral when
it was posted in June; in this video the
face of British comedian Rowan Atkinson
is superimposed on Donald Trump’s body
YouTube; Getty Images
