New Scientist – August 17, 2019


Space

Milky Way's black hole is suddenly mysteriously bright

Ruby Prosser Scully


THE supermassive black hole at
the heart of our galaxy has started
flashing brighter than we have ever
seen it, and we don’t know why.
In the space of 2 hours, the
black hole’s brightness increased
75-fold – and astronomers believe
it was even more intense before
this observation began. This is the
brightest and most variable that
the black hole, Sagittarius A*, has
been since scientists first started
studying it more than 20 years ago.
Tuan Do at the University
of California, Los Angeles, and
his colleagues observed this
unusual brightness with the Keck
telescope in Hawaii (arxiv.org/
abs/1908.01777).
The illumination comes from the
black hole heating the dust and gas
around it. Although it is 4 million
times our sun’s mass, Sagittarius A* is dimmer than many black holes.
Normally it flickers like a candle,
sometimes even too faint to see.
One possible explanation for the
new activity is that the black hole
is gobbling something up. “Maybe
more gas is falling into the black
hole and that leads to higher
amounts of accretion, which leads
to it being brighter,” says Do.
Last year, the star S2 orbited
close to the black hole. It may
have disturbed surrounding gas,
prompting more to fall in now
and heat up.
Another possibility is that the
change relates to the strange
gaseous object called G2 that
slingshotted around the black
hole in 2014. At the time,
some astronomers expected
fireworks and activity, only to be
disappointed. These may just have
been inexplicably delayed, says Do.
More observations will clear up
the mystery, he says. But we have a limited window to observe the black hole before Earth’s movement around the sun puts that part of the galaxy out of view until next year. ❚


Artificial intelligence

Google’s hate speech AI may be racially biased

Donna Lu


ARTIFICIALLY intelligent
systems that detect hate speech
have been found to show racial
bias. Such AIs automate the
huge task of filtering abusive
or offensive online content, but
they may inadvertently silence
minorities in the process.
Maarten Sap at the University
of Washington and his
colleagues found that AIs
trained to recognise online
hate speech were up to twice
as likely to mark tweets as
offensive when they were
written in African-American
English or by people who
identify as African American.
This includes Perspective, a
tool built by Google and sister
company Jigsaw. It assigns
toxicity ratings to text and is
used by organisations including
The New York Times to moderate
online discussions.
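
Perspective is queried over a public REST API. A minimal sketch of such a request follows, based on the API's v1alpha1 documentation; the key is a placeholder, and the exact field names may have changed since this was written:

```python
import requests

# Placeholder – a real key comes from a Google Cloud project
API_KEY = "YOUR_API_KEY"
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       "comments:analyze?key=" + API_KEY)

payload = {
    "comment": {"text": "you are wonderful"},
    "languages": ["en"],
    # Ask Perspective to score the text for toxicity
    "requestedAttributes": {"TOXICITY": {}},
}

response = requests.post(URL, json=payload).json()
score = response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
print(f"Toxicity score: {score:.2f}")  # 0 (benign) to 1 (toxic)
```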

AIs that detect hate speech are
trained on examples that have
been categorised by humans as
being innocuous or offensive.
Sap and his colleagues
studied two such commonly used data sets. They found a correlation between tweets written in African-American
English (AAE), a dialect spoken
primarily by black people in the
US, and the likelihood of their
being labelled by a human
as hate speech. This may be
because certain slang words
used inoffensively in AAE
vernacular can be insulting
in other contexts.
The team then trained two
AIs on some of these labelled
tweets and tested how they
categorised the rest. The worst-performing AI wrongly flagged 46 per cent of inoffensive AAE tweets as offensive.
Testing the AIs on bigger data
sets, including one of 5.4 million tweets whose authors had self-identified their race, the team
found that tweets by African
Americans were 1.5 times
more likely to be labelled as
offensive. In another set of
56 million tweets, those written
in AAE were more than twice as
likely to be labelled as offensive
or abusive.
The researchers also tested
Perspective on these data
sets and found a correlation
between AAE dialect and
toxicity scores, suggesting that
it shows similar racial bias.
Perspective’s algorithm is
trained on millions of online
comments and picks up biases
that appear naturally in the
data sets, says Jigsaw’s chief
operating officer, Dan
Keyserling. “We are transparent
about these issues and we
constantly retrain our models to
help them get smarter,” he says.
Still, the use of biased AIs by
major platforms could cause
problems, such as suppressing
minority voices, says Sap.
“Because humans are
inherently biased, we have to
assume that all algorithms are
biased,” says Matthew Williams
at Cardiff University, UK.
Sap and his colleagues also
tested a way of mitigating bias
by asking humans to label
more than 1000 tweets as
offensive or inoffensive while
considering both the likely
dialect and race of the writer.
The result was a significant fall
in the likelihood of an AAE tweet
being marked as offensive.
The research was presented
at the Annual Meeting of the
Association for Computational
Linguistics in Italy. ❚



MASKOT/GETTY

“Tweets by African Americans were 1.5 times more likely to be labelled as offensive by AIs”

AIs can struggle to tell hate speech from vernacular language