The Observer - 25.08.2019

Racist abuse in the real world is in decline, so why not on Twitter?

As I have found to my cost, the social media platform is still failing abysmally to root out its vilest and most hateful users


Illustration by Dominic McKenzie



Britain is a much less racist country than the one I grew up in. Yet I have received more racist abuse in the last four weeks than in the previous 40 years. This paradox can be explained in a single word: technology. More specifically, Twitter. It helps to explain why, last week, social media once again became the key battleground for tackling racism.
Growing up, my experiences of racism were pretty
banal. If you called me a “paki” in the playground, you
could expect a sarcastic lecture about the need to get
an atlas, as my dad is from India. Nobody ever hit me
but as a football-mad teenager in the 1980s, I heard
shocking racism, which was eventually stamped out
by the mid-1990s. Monkey chanting in the ground has
given way to hatred on social media – see the racist
abuse against Paul Pogba of Manchester United and
other leading black players which hit the headlines
last week.
It is more than 20 years since anyone was racist to
my face and I would rarely encounter racism at all if I
weren't on Twitter. But I am an addict. The platform can
even tell me that I have sent more than 180,000 tweets
over the last 11 years.
I’ve found it an enormously positive experience,
although worrying about the growing incivility of public
discourse led me to invent #positivetwitterday in 2012,
promoting it in an unusual alliance with blogger Guido
Fawkes. This annual fixture, on the last Friday in August,
challenges tweeters to behave civilly for at least one
day, a symbolic way to deepen the conversation about
what we can all do to shape the social media culture
that we want. I’m pleased that this year Twitter is also
supporting it.
So users can shift the tone of online discourse, but the racist firestorm I've experienced this month demonstrates why social media companies must play their part.
Ironically, this began when I shared some good news
about progress on race: an Observer report on research
that nine out of 10 people don’t think you have to be
white to be English. That sort of social change is a
welcome message for, well, about nine out of 10 people.
Several hundred people retweeted and liked the tweet,
but I also heard how angry the most toxic members of
that shrinking racist minority felt about it.
I decided to report the racial abuse that I received to
Twitter’s systems, something I had never done before.
The results were illuminating. I reported about 50 racist
users that weekend. About a third were deemed out of
order; two-thirds were judged to be OK.

What sorts of racism does Twitter let users get
away with? You probably won’t get away with calling
somebody “nigger”, but I was told by Twitter that “You
are not British, parjeet – you people are shitting in the
street” was acceptable. I emailed back to ask what more
the user had to say to break the “hateful conduct” rules.
A response just said it had been checked and upheld. I
wondered how often a human being was reading my
messages – and how often an algorithm.
Getting these boundaries right is difficult. I want
Twitter to do more against racists, but I don’t think they
should ban Donald Trump, for example – though some
of his tweets might have to go. With the aim of at least
educating users, I began a new hashtag, #doesnotviolate,
to promote transparency about what gets allowed.
Twitter says it abhors racism on the platform, but its current rules permit racist speech, only banning users who promote violence or engage in threats or harassment on racial grounds. It did tighten its anti-hatred policies this month, and dehumanising tweets against faith groups now violate the rules. Twitter gave examples: “We need to exterminate the rats; the Jews are disgusting” would now be out of order. I was astonished that it wasn’t already. Yet say exactly the same thing about black people and it is still OK: tweets dehumanising racial groups are still deemed acceptable by Twitter. Changing that policy is an urgent necessity.

New rules won’t help if Twitter can’t enforce them. Solidarity from other tweeters included sending me astonishing evidence of just how often those orchestrating the harassment of me had already been banned. A virulent antisemite using the handle “Noxious Jew” openly boasts about being the same banned user. Twitter allowed him to reregister dozens of vile variations, such as “Fetid Jew”, “Pungent Jew” and “Malodorous Jew”, with messages sarcastically hashtagged #myfirsttweet asking his racist network to rebuild his audience. I sent that to Twitter’s enforcement team and got an email back saying it could not find any violation of its rules.
Racists banned from football grounds can’t just
turn up again the following weekend, but hundreds
of virulent racist accounts banned from Twitter
openly celebrate how easily they flout their bans. Twitter
needs to combine technology and human capacity to
sort this out.
Is reporting online racism futile? “Just block and move on,” I was told. But the answer to racism in football grounds wasn’t earplugs for black and Asian fans. We changed the culture at the grounds. It is a mistake to think of “online” and “offline” as hermetically sealed worlds. If we don’t emulate on social media the progress we made in playgrounds and stadiums, this organised effort to relegitimise racism will leak back into society.
Phil Neville, manager of the England women’s team,
proposes a six-month boycott by footballers. But I don’t
want to leave the platform. Instead, it must be time for
Twitter to show racism the red card and to mean it.

Sunder Katwala is director of British Future
