But the system needs to be trained. The more data fed into it — whether images of terrorist insignia or harmful keywords — the more the machine learning technology improves. Without enough training data, the system does not know what to look for.
A recent example was when Facebook said it did not have enough first-person shooter video footage for its algorithms to recognise and take down the videos of the attacks on two mosques in New Zealand earlier this year.

Facebook has now equipped London police with body cameras during terrorist training exercises to get more footage, having eschewed using footage of video game shoot-outs or paintballing.
According to Mr Schroepfer, its numerous data sets will typically be made up of tens of thousands — or even millions — of examples to learn from.

These should include not just precise examples of what an algorithm should detect and “hard negatives”, but also “near positives” — something that is close but should not count. For example, for image recognition of a water bottle, the system should classify hand sanitiser as a near positive.
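As a rough illustration of how such a labelled set might be organised (a hypothetical sketch in Python, not Facebook's actual data or pipeline), near positives are included with negative labels so the classifier learns where the boundary lies:

```python
# Hypothetical moderation training set: "near positives" resemble the target
# class but carry a negative label, so the model learns not to flag them.
training_examples = [
    # (image_path, label) where 1 = target class, 0 = everything else
    ("images/water_bottle_01.jpg", 1),  # precise positive
    ("images/water_bottle_02.jpg", 1),
    ("images/landscape_17.jpg", 0),     # hard negative: clearly unrelated
    ("images/hand_sanitiser.jpg", 0),   # near positive: similar, must not count
]

positives = [path for path, label in training_examples if label == 1]
negatives = [path for path, label in training_examples if label == 0]
print(f"{len(positives)} positives, {len(negatives)} negatives")
```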
Facebook will typically train its AI on content posted by its users, as well as publicly available data sets. When it comes to images and memes, data sets can be created to take into account the fact that some people will doctor an original in order to evade detection.

The company has regional human moderators who are told to stay alert for new tricks, and has external partners. The University of Alabama at Birmingham, for example, is helping Facebook keep abreast of newly emerging street names for drugs.

“In a lot of cases, this is an adversarial game,” Mr Schroepfer said. “[Adversaries] are trading tips and tricks like, hey, if you just cut the video like this, put a border around it, you can repost it without detection,” he added.
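Re-cutting or padding a clip works against duplicate detection because simple matching systems compare compact fingerprints of an image or frame. A minimal sketch, using a naive average-hash fingerprint and hypothetical file names (not Facebook's actual matching technology), shows how adding a border can change the fingerprint enough to defeat an exact-match lookup:

```python
from PIL import Image, ImageOps  # requires the Pillow package


def average_hash(image, size=8):
    """Tiny perceptual hash: shrink to size x size, greyscale, threshold on the mean."""
    small = image.convert("L").resize((size, size))
    pixels = list(small.getdata())
    mean = sum(pixels) / len(pixels)
    return "".join("1" if p > mean else "0" for p in pixels)


original = Image.open("banned_frame.jpg")        # hypothetical flagged frame
reposted = ImageOps.expand(original, border=40)  # attacker adds a black border

# The border shifts and rescales the content, so the two fingerprints diverge
# and an exact-match lookup against a blocklist of known hashes no longer fires.
print(average_hash(original) == average_hash(reposted))  # very likely False
```

More robust systems compare fingerprints by Hamming distance rather than exact equality, which tolerates small edits at the cost of more false matches.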
For text, there are multiple languages to account for, and those that are less common are harder for the computer to understand.
“Myanmar — we know we need to do a better job there,” said Guy Rosen, Facebook’s vice-president of integrity. Last year the company faced criticism for being too slow to clamp down on groups inciting violence. “There’s not a lot of content in the world in Burmese, which means there’s not a lot of training data.”

Facebook is now translating watchwords across multiple languages but the system is better at spotting the sort of language used by groups designated as terrorists by the UN, such as Isis or al-Qaeda, according to Sasha Havlicek, chief executive of the Institute for Strategic Dialogue, a London-based think-tank that specialises in violent extremism and terrorism.

This means that “the internet companies haven’t quite caught up to the far-right challenge yet”, she said.
Experts warn that AI still falls dramatically short when it comes to policing “grey area” content, particularly hate speech or harassment, which requires understanding of nuance or knowledge of the latest slang.

Already it is a divisive area — Facebook is in the middle of creating an independent content moderation “Supreme Court”, where users can challenge an individual content decision if they believe it to be unfair.

One attendee at Facebook’s annual shareholder meeting complained, for example, that the company had banned her from selling T-shirts on the site with slogans such as “Men are Trash”, which were deemed dehumanising under Facebook’s rules.
Meanwhile, it is close to impossible for current algorithms to detect some of the wider context around slurs — for example, whether they are said in jest, as reclamation or as condemnation. “When the level of subtlety goes up, or context goes up, the technical challenges go up dramatically,” Mr Schroepfer said.
One solution is to assess other signals, such as a user’s behavioural patterns on the platform, or the comments in response to a post. But the company argues that AI will always need humans for the labelling and review of this type of borderline content.

“People often pose this as an oppositional thing, like people versus the machines,” said Mr Schroepfer. “I think of it as human augmented.”
Some researchers argue that Facebook’s entire strategy is misguided. Instead, it should focus on how its news feed algorithms serve content to users.

“The algorithms are designed to show you things it thinks are of interest, designed to keep you on the platform longer,” said Joan Donovan, director of the Technology and Social Change research project at the Harvard Kennedy School, who specialises in online extremism and media manipulation. “In doing that, they tend to move closer to content that is outrageous, that is novel.”

Ms Havlicek adds: “From the outset we have said the playing field is not level. It’s meaningless if there is a structural imbalance in relation to amplification of extreme messaging. If you don’t address the underlying tech architecture that amplifies extremism through the algorithmic design, then there is no way to outcompete this.”

[Chart: Facebook moderation. Share of violations Facebook found before users reported them, by quarter. Panels: bullying and harassment (incidents, m); hate speech (incidents, m); fake accounts (incidents, bn). FT graphic. Source: Facebook]


Hannah Murphy — San Francisco
Madhumita Murgia — London


Facebook faces a monumental challenge: how can its 35,000 moderators watch over billions of posts and comments every day to sift out abusive and dangerous content?

Just 18 months ago, Mark Zuckerberg, Facebook’s founder, was confident that rapid advances in artificial intelligence would solve the problem. Computers would spot and stop bullying, hate speech and other violations of the site’s policies before they could spread.
But while the company has made significant advances, the promise of AI still seems distant. In recent months, Facebook has suffered high-profile failures to prevent illegal content, such as live footage from terrorist shootings, and Mr Zuckerberg has conceded that the company still needs to spend heavily on humans to spot problems.

“There’s just so much content flowing through the system that we do need a lot of people looking at this,” he said.
In interviews, Facebook’s executives in charge of developing moderation software and outside experts said that there were persistent, and perhaps insurmountable, challenges. These include finding the right data to train artificial intelligence algorithms, developing programs that understand enough nuance and context to spot hate speech, and outsmarting human adversaries who keep learning how to game the system.

“We’re pushing the frontier,” said Mike Schroepfer, Facebook’s chief technology officer. But where there have been grave mistakes, “the technology was just not up to what we do”.

In its earlier days, Facebook relied on its users to report objectionable content, before human moderators would review the material and decide whether to take it down or not. But over the past five years or so, Facebook has built a team of “hundreds” of machine learning experts, engineers and data scientists to develop algorithms that can automatically flag unwanted content.
According to Mr Schroepfer, technologies for image recognition — which were unreliable before 2014 — are now “stunningly good”. Language understanding, which was introduced for hate speech in 2017 for example, is improving, but still fairly nascent as algorithms struggle to account for context.

“If you have to sit and stare at a problem and do a bunch of internet research... and it’s going to take you 10 minutes, I don’t have a lot of hope that AI is going to understand that in the next 12 months,” he said. “But if you could sit there and do it in five to 10 seconds — we’re getting to the point where AI systems are probably going to be better than you at that.”

The use of these algorithms comes as a spate of media reports has highlighted the devastating effects on the mental health of content moderators, many of whom are low-paid contractors, of having to sift through disturbing content to remove it.


Facebook delayed at ‘frontier’ of AI policing


Human monitors prove essential for spotting abuse as US group’s tech drive advances but remains short of target


Long view: technologies for image recognition — which were unreliable before 2014 — are now ‘stunningly good’, according to Mike Schroepfer, chief technology officer — FT montage/Reuters



More on ft.com: for the latest news, comment and analysis on Facebook and the technology sector, go to ft.com/technology

Mark Zuckerberg, Facebook’s founder, was confident 18 months ago that AI would solve content problems
