New Scientist - USA (2022-04-09)


showing that pictures of humans in emotional
situations could trigger neural activity in a
dead Atlantic salmon. The researchers had
used the fish to test that their scanner worked,
but then wrote up the faux “study” to show
how easy it is to get false results. “That was an
important message,” says Gould van Praag.
But MRI researchers still have problems.
A bombshell study published last month found that
research on links between the brain’s structure
or function and complex mental traits – such
as cognitive abilities or risk of depression –
would need thousands of participants. The
average number of people scanned is 23.
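
To get a feel for why a couple of dozen scans is nowhere near enough, here is a toy simulation in Python. The true correlation of 0.1, the sample sizes and the number of simulated studies are illustrative assumptions of mine, not figures from the study itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_correlations(n, true_r, n_studies=2000):
    """Observed correlation between a brain measure and a behavioural
    trait across many hypothetical studies, each scanning n people."""
    cov = [[1.0, true_r], [true_r, 1.0]]
    estimates = []
    for _ in range(n_studies):
        sample = rng.multivariate_normal([0.0, 0.0], cov, size=n)
        estimates.append(np.corrcoef(sample[:, 0], sample[:, 1])[0, 1])
    return np.array(estimates)

TRUE_R = 0.1  # assumed, illustrative effect size

for n in (23, 2000):
    r_hat = simulate_correlations(n, TRUE_R)
    print(f"n={n:5d}  mean r={r_hat.mean():+.3f}  "
          f"spread={r_hat.std():.3f}  "
          f"wrong sign in {np.mean(r_hat < 0):.0%} of studies")
```

Under these assumptions, a 23-person study gets even the direction of the effect wrong in roughly a third of cases, while a study with thousands of participants pins the correlation down tightly.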
Another way that the scientific record gets
distorted is publication bias: studies that find
a positive result are more likely to be published
than those that don’t. It isn’t just scientists
who tend to brush negative results under the
carpet – so do editors and peer reviewers of
journals. A less-talked-about kind of publication
bias is the preference for headline-generating
results. If you think journal editors should be
above caring whether their papers are exciting,
think again. The publishing industry has a
ranking system called impact factors to
indicate a journal’s kudos. Impact factors are
based on how many times each journal’s
papers have been cited in other papers. So
“groundbreaking” research that gets more
citations is highly prized.
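
For the curious, the standard two-year impact factor is just a ratio: citations received this year to the papers a journal published in the previous two years, divided by the number of papers it published in those two years. A simplified sketch, with invented numbers:

```python
def two_year_impact_factor(citations_this_year, papers_last_two_years):
    """Simplified two-year impact factor: citations received this year
    to papers published in the previous two years, divided by the
    number of papers published in those two years."""
    return citations_this_year / papers_last_two_years

# Hypothetical journal: 400 papers published over the past two years,
# cited 3,200 times this year.
print(two_year_impact_factor(3200, 400))  # 8.0
```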
The lure of fame has probably tempted some
into bad science, but over recent years, the
pressures have grown on scientists to produce
as many papers in high-ranking journals as
they can. One contributor to this ethos,
sometimes known as publish or perish, is the
increasingly competitive nature of academia,
with growing numbers of junior researchers
competing for limited jobs.
Its foundation is the way academic success
is measured. Universities and the bodies that
fund research – usually governments or
medical charities – want to assess scientists’
performance. That is hard, so they ask
scientists how many papers they have
published and what the journal impact factors
were. “We are incentivised to publish and get
grants,” says psychologist Marcus Munafò at
the University of Bristol, UK, who heads the UK
Reproducibility Network, a group of researchers
who are trying to improve things. “Those are
the things that get us jobs, get us promoted.
It doesn’t matter if the findings are wrong.”

confine my criticisms just to food research.
This state of affairs shouldn’t really be
surprising because the same causes of bad
science in psychology are turning up in many
places. Cherry-picking is increasingly possible
because of the vast amounts of data generated
in research these days, as a result of greater
computing power to store and analyse it.

Noisy numbers
The more complex the science, the more
data you get – like in brain scanning. Despite
producing compelling images showing
different parts of the brain “lighting up” in
response to different stimuli, what actually
comes out of an fMRI scanner is a mass of
numbers. Researchers must then work out
changes in brain activity from changes in
blood flow, while factoring in multiple other
variables. “We have to remove all of the noise
in the scanner, noise in the way that our body’s
responding... and tens of other complications,”
says neuroscientist Cassandra Gould van Praag
at the University of Oxford.
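
What comes out of the scanner is, at bottom, one noisy time series of numbers for each small chunk of brain. As a rough illustration of what disentangling signal from noise involves, here is a toy sketch of one common analysis approach, the general linear model. It is not Gould van Praag's pipeline: the regressors and numbers are invented, and the haemodynamic response is ignored for simplicity.

```python
import numpy as np

rng = np.random.default_rng(1)
n_scans = 200

# Things we think drive the measured signal in one voxel:
task = np.tile([0.0] * 10 + [1.0] * 10, n_scans // 20)  # on/off block design
drift = np.linspace(0.0, 1.0, n_scans)                   # slow scanner drift
motion = rng.normal(size=n_scans)                        # stand-in for a head-motion trace

# Synthetic "BOLD" signal: a weak task response buried in drift and noise.
bold = 0.5 * task + 2.0 * drift + 0.3 * motion + rng.normal(size=n_scans)

# Design matrix with an intercept; the nuisance columns soak up drift and
# motion so they do not contaminate the estimate of the task effect.
X = np.column_stack([np.ones(n_scans), task, drift, motion])
beta, *_ = np.linalg.lstsq(X, bold, rcond=None)

print(f"estimated task effect: {beta[1]:+.2f} (true value in this toy data: 0.5)")
```

The point is that a brain region "lighting up" is really a regression coefficient that survives after drift, motion and other nuisance effects have been modelled away.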
Initial brain scanning studies used too few human subjects for real patterns in the data to stand out from the noise. Matters
came to a head in 2009 with a study apparently

MAURO FERMARIELLO/SCIENCE PHOTO LIBRARY

may not translate to people. Before this,
I didn’t realise that results in mice may not
even translate to other mice. Of course, not all
university biomedical research is wrong – it
has been the basis for many life-enhancing
inventions, not least vaccines against covid-19.
But I hate to think how many stories I have
written that gave people false hope. Cancer
research seems particularly hard hit, with a
project published in 2021 confirming how few
papers in this area can be replicated. All that
waste of researchers’ time, all those animals
that died for nothing, not to mention the
money, often raised by the public through fun
runs and bake sales.
If you think the solution is to pay attention
only to research in people, rather than in cells
or animals, forget it. Several fields of medical
research carried out in people also have
reputations for unreliable findings. Take
nutrition research, which aims to find out
which foods are healthiest. It has become a
standing joke that, depending on the day of the
week, you can find studies telling you that a given food is either good or bad for you. When I wrote
an article about this in 2019, saying we should
be sceptical of all nutrition studies, some
colleagues thought I must be overstating the
problems. If anything, my mistake was to


“Biotech firm Amgen investigated 53 landmark papers and found that only 1 in 10 could be replicated”


Medical research using cells (left) and animals (right) is particularly likely to be unreplicable