New Scientist - USA (2019-06-15)

Particle physics

Bizarre pentaquark is a subatomic ‘molecule’

Leah Crane

THE pentaquark, a particle that eluded physicists for decades before finally being found in 2015, is starting to give up its secrets. New measurements from the Large Hadron Collider (LHC), which straddles the France-Switzerland border, have revealed that the pentaquark is made of two smaller types of particle, called a baryon and a meson, stuck together in a sort of miniature molecule.

Quarks are the tiny, indivisible particles that make up protons and neutrons, which are themselves baryons. “We are made out of quarks, but somewhat simpler collections of quarks than pentaquarks,” says Tomasz Skwarnicki at Syracuse University in New York.

He and his colleagues working on the LHCb particle detector made the first unambiguous detection of a pentaquark in 2015 by smashing protons together. Now, thanks to upgrades at the LHC, they have nine times more data (Physical Review Letters, doi.org/c6x2).

They found that pentaquarks are made of a baryon – a collection of three quarks – stuck to a meson, which is one quark and an antiquark. “This is somewhat analogous to how molecules are created out of atoms,” says Skwarnicki.

We had never seen evidence before that baryons and mesons could stick together, although it wasn’t unexpected: we know that baryons can adhere to one another – if that weren’t possible, we wouldn’t be here. ❚

[Image: a pentaquark comprises a baryon and a meson stuck together]

Artificial intelligence

AI article writer could help detect fake news

Donna Lu

ARTIFICIAL intelligence that can quickly generate paragraphs in response to a simple prompt can be used to churn out convincing but untrue stories. Paradoxically, these fake news-generating AIs may also be the best way to take on fake news.

Rowan Zellers at the University of Washington in Seattle and his colleagues have created an AI called Grover that can both write and detect fake news. They trained it on tens of millions of articles from news websites, totalling 120 gigabytes of data.

Grover learned to write articles, adjusting its style to mimic pieces published during a particular time period or featured on a specific website, depending on what it was asked to do.

When given fake headlines such as “No substantial evidence for climate change” or “Eating bread crusts actually gives you curly hair”, Grover spits out articles within seconds, complete with invented statistics and faked quotes, often attributed to real experts or politicians (see “An AI writes for New Scientist”, below).

The team assessed the ability of four AIs, including Grover, to distinguish between 5000 real news articles written by human journalists and 5000 fake ones generated by Grover. Each AI faced two tests. In the first, it was given a news article and had to classify it as human or machine-written. In the second, it was presented with two articles – one true, one fake – and had to figure out which was which.

To avoid a situation in which the news generator and verifier models of Grover were identical, the team trained the generator for an eighth longer, so that it became more sophisticated.

The best AI at identifying Grover’s fake news articles was the verifier version of Grover itself, with an accuracy of 92 per cent. The next-best AI had an accuracy of 73 per cent (arxiv.org/abs/1905.12616). The team didn’t test Grover’s abilities on text generated by other AIs.

Using Grover as a defence against itself drew inspiration from the computer security concept of threat modelling, says Zellers. “This is the idea that in order to provide defences to attacks you really need to understand what adversaries will do,” he says.

The team plans to share Grover with other researchers as a possible future contingency against powerful AIs that are able to generate fake news.

The approach differs from that taken by OpenAI, the Elon Musk-backed tech firm behind a similar text-generating AI, GPT-2. OpenAI announced in February that it wouldn’t publicly release the most sophisticated model of its AI out of concern it could be used for malicious purposes. “It seems quite possible that the release of generators is going to be quite harmful,” says OpenAI’s Jeff Wu. In the wrong hands, it would give people the ability to rapidly generate tens of thousands of articles, he says.

In the future, major websites and platforms could run text-checking AIs to filter out fake content before it is shared, in a similar way to how YouTube already uses algorithms to check uploaded videos for nudity and copyright infringement, says Zellers. ❚
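The paired test described above – hand a detector one human-written and one machine-written article and ask it to pick the fake – can be sketched in a few lines. This is a simplified illustration, not Grover's actual code: the function names (`paired_accuracy`, `toy_score`) are made up for this sketch, and the toy scorer stands in for a real trained language-model detector.

```python
def paired_accuracy(score_fake, pairs):
    """pairs: list of (human_text, machine_text) tuples.
    The detector 'wins' a pair when the machine-written text
    gets the higher machine-written score."""
    correct = sum(1 for human, machine in pairs
                  if score_fake(machine) > score_fake(human))
    return correct / len(pairs)

# Hypothetical stand-in scorer: a real detector like Grover's verifier
# uses a trained neural language model. Here we pretend machine text
# simply overuses the word "very".
def toy_score(text):
    words = text.lower().split()
    return words.count("very") / max(len(words), 1)

pairs = [
    ("Reporters confirmed the findings with two sources.",
     "The very new study was very conclusive and very important."),
    ("The committee voted 7-2 to adopt the measure.",
     "Experts say the result is very significant."),
]
print(paired_accuracy(toy_score, pairs))  # -> 1.0 on this toy data
```

The 92 per cent figure reported for Grover's verifier corresponds to accuracy under evaluations of this general shape, with a learned scorer in place of the toy one.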


An AI writes for New Scientist

The following text was made by Grover, an AI (and edited for brevity by a human), from the headline “Eating bread crusts actually gives you curly hair”.

Many people cook a bowl of fresh bread the morning after a hard night of tossing and turning on the sofa. With little thought, people add the crust of the bread to the mix of water and flour. However, as one researcher in Boston found out, this bread ingredient is part of a diet that makes hair grow curly.

The first slice of bread is an important culinary cultural marker. What should I make from it? What are the ingredients?

Eating the bread crust might seem like no big deal. However, in a less than 10-year study conducted by Dr. Abdulhakim El-Sayegh of Boston University, his team found that in European girls, one third of girls had curly hair after eating the leftover crusts of fried or boiled bread and sandwiches. The findings were presented as early as 2014 in the American Journal of Clinical Nutrition.

