Science - USA (2021-12-10)

NEWS | IN DEPTH

BIOMEDICINE

Key cancer results failed to be reproduced

Project to replicate high-impact preclinical cancer studies delivers sobering verdict

By Jocelyn Kaiser

An ambitious project that set out 8 years ago to replicate findings from top cancer labs has drawn to a discouraging close. The Reproducibility Project: Cancer Biology (RP:CB) reported this week that when it attempted to repeat experiments drawn from 23 high-impact papers published about 10 years ago, fewer than half yielded similar results.
The findings pose “challenges for the credibility of preclinical cancer biology,” says psychologist Brian Nosek, executive director of the Center for Open Science (COS), a co-organizer of the effort. The project also points to a need for authors to share more details of their experiments so others can try to reproduce them, he and others involved argue. Indeed, vague protocols and uncooperative authors, among other problems, ultimately prevented RP:CB from completing replications for 30 of the 53 papers it had initially flagged, the team reports in two capstone papers in eLife.
“It is useful to have this level of objective data about how challenging it can be to measure reproducibility,” says Charles Sawyers of Memorial Sloan Kettering Cancer Center, who reviewed the designs and results for some of the early replication studies. But, he adds, “It’s hard to know whether anything will change as a consequence.”
Nosek’s center and the company Science Exchange set up RP:CB in 2013, after two drug companies reported they could not reproduce many published preclinical cancer studies. The goal was to replicate key work published by journals such as Science, Nature, and Cell from 2010 to 2012. With foundation funding, the organizers designed replication studies that were peer reviewed by eLife to ensure they would faithfully mimic the original experiments.
The project’s staff soon ran into problems because all the original papers lacked details such as underlying data, protocols, statistical code, and reagent sources. When authors were contacted for this information, many spent months tracking down details. But only 41% of authors were very helpful; about one-third declined or did not respond. Other problems surfaced when labs began experiments, such as cells that did not behave as expected in a baseline study.
The project ended up paring the initial list of 53 papers, comprising 193 key experiments, to just 23 papers with 50 experiments. The team completed all replications for 18 of those papers and some experiments for the rest; starting in 2017, the results from each have been published, mostly as individual papers in eLife. All told, the experimental work cost $1.5 million.
Results from only five papers could be fully reproduced. Other replications yielded mixed results, and some were negative or inconclusive. Overall, only 46% of 112 reported experimental effects met at least three of five criteria for replication, such as a change in the same direction (increased cancer cell growth or tumor shrinkage, for example). But even when the effects reappeared, their magnitude was usually much more modest, on average just 15% of the original effect. “That has huge implications for the success of these things moving up the pipeline into the clinic. [Drug companies] want them to be big, strong, robust effects,” says Tim Errington, project leader at the COS.
The findings are “incredibly important,” says Michael Lauer, deputy director for extramural research at the National Institutes of Health (NIH). At the same time, Lauer notes, the lower effect sizes are not surprising because they are “consistent with ... publication bias”: the most dramatic and positive effects are the most likely to be published. And the findings don’t mean “all science is untrustworthy,” Lauer says.

Indeed, labs have reported findings that support most of the papers, including some that failed in RP:CB. And two animal studies that weren’t replicated by RP:CB have led to promising early clinical results: one for an immunotherapy drug and another for a peptide designed to help drugs enter tumors.
Still, the findings underscore how elusive reliable results can be in some areas. Johns Hopkins University infectious diseases physician-scientist Cynthia Sears, who reviewed papers on links between gut bacteria and colon cancer that were not fully replicated, says simple differences, such as the local bacteria in a lab’s animal quarters, can sway results. RP:CB, she adds, was “an instructive experience.”
If there’s one key message, it’s that funders and journals need to beef up requirements that authors share methods and materials, RP:CB leaders say. NIH data sharing rules starting in January 2023 may help, Lauer notes.
Such rules may help but not fully solve the problem, Sawyers says. “In the end, reproducibility will likely be determined by results that stand the test of time, with confirmation and extension of the key findings by other labs.”

Disappointing numbers
Out of 53 prominent preclinical cancer papers, only 23 could be put to the test, and many did not have clearly reproducible results.

[Graphic: chart of the 23 replicated papers (Berger 2012; Willingham 2012; Hatzivassiliou 2010; Tay 2011; Sugahara 2010; Liu 2011; Castellarin 2012; Lu 2012*; Goetz 2011; Lin 2012; Johannessen 2010*; Arthur 2012; Garnett 2012; Poliseno 2010; Peinado 2012; Sirota 2011; Kan 2010*; Ricci-Vitiani 2010*; Dawson 2011; Heidorn 2010*; Vermeulen 2010; Ward 2010; Delmore 2011), showing citations of all papers and each paper’s replication outcome: positive, mixed, negative, or uninterpretable. *Incomplete experiments.]

CREDITS: (GRAPHIC) K. FRANKLIN/SCIENCE; (DATA) REPRODUCIBILITY PROJECT: CANCER BIOLOGY