Scientific American - USA (2021-03)



OBSERVATORY
KEEPING AN EYE ON SCIENCE

Naomi Oreskes is a professor of the history of science
at Harvard University. She is author of Why Trust Science?
(Princeton University Press, 2019) and co-author
of Discerning Experts (University of Chicago, 2019).

When Experts Get It Wrong

Accurate assessments are hard to come by if you consult the wrong people

By Naomi Oreskes


“Experts are always getting it wrong” is now a familiar trope. As a historian of science, I disagree: I think history shows that scientific experts mostly get things right. But examples where experts have gone wrong offer the opportunity to better understand the limits of expertise. A case in point is the Global Health Security Index (GHSI), the result of a project led by the Nuclear Threat Initiative and the Johns Hopkins Center for Health Security. It was published in October 2019, just weeks before the novel coronavirus made its appearance.
GHSI researchers evaluated global pandemic preparedness in 195 countries, and the U.S. was judged to be the most prepared country in the world. The U.K. was rated second overall. New Zealand clocked in at 35th. Vietnam was 50th. Well, those experts certainly got that wrong. Vietnam and New Zealand had among the best responses to the COVID-19 pandemic; the U.K. and the U.S. were among the worst.
In fairness, the study did not conclude that overall global preparedness was good or even adequate. It warned that global health security was “fundamentally weak” and that no country was fully prepared for either an epidemic or a pandemic. The COVID pandemic was equivalent to a giant fire before which almost no one had done a fire drill. But while these experts got the coarse-grained analysis right, they were grossly wrong in their nation-by-nation assessment. As we now know, both the U.S. and the U.K. have suffered death rates much higher than many countries that the GHSI rated as far less prepared. The study results were so wrong in this regard that one post hoc analysis concluded that it was “not predictive”; another dryly observed that it was predictive but in “the opposite direction.” So what happened?
The GHSI framework was based heavily on “expert elicitation”—the querying of experts to elicit their views. (This method contrasts with consensus reports such as those produced by the U.S. National Academy of Sciences or the Intergovernmental Panel on Climate Change, which are primarily based on a review of existing, peer-reviewed publications.) Expert elicitation is often used to predict risks or otherwise evaluate things that are hard to measure. Many consider it to be a valid scientific methodology, particularly to establish the range of uncertainty around a complex issue or, where published science is insufficient, to answer a time-sensitive question. But it relies on a key presumption: that we’ve got the right experts.
The GHSI panel was understandably heavy with directors of national and international health programs, health departments and health commissions. But the experts included no professional political scientist, psychologist, geographer or historian; there was little expertise on the political and cultural dimensions of the problem. In hindsight, it is clear that in many countries, political and cultural factors turned out to be determinative.
Consider the U.S., a country with some of the most advanced scientific infrastructure in the world and a prodigious manufacturing and telecommunications capacity. The U.S. failed to mobilize this capacity for reasons that were largely political. Initially the president did not take the pandemic seriously enough to organize a forceful federal response, and then, by his own admission, he played it down. More than a few politicians and celebrities flouted public health advice, appearing in public without masks well after the evidence of their benefits had been communicated. Our layered and decentralized system of government led to varied policies, in some cases putting state governments in conflict with their own cities. And many refused to practice social distancing, interpreting it as an unacceptable infringement on their freedom.
To evaluate American preparedness accurately, the GHSI group needed input from anthropologists, psychologists and historians who understood American politics and culture. In fact, it would have had to grant social scientific expertise primacy because social factors, such as racial inequality, most strongly shaped the American outcome. Around the globe, whether countries were able to mount an effective pandemic response depended crucially on governance and the response of their citizens to that governance. The GHSI team got it wrong because the wrong experts were chosen.

JOIN THE CONVERSATION ONLINE
Visit Scientific American on Facebook and Twitter
or send a letter to the editor: [email protected]