
true and 50 false. Thus, Pr(cancer | +) = 1/51, or about 2 percent. Why is this
probability so low? It is because the disease is very rare in the first place.
Because the test is not quite perfect, the false positives tend to swamp the true
positives. Thus, the revised probability is much lower than intuition would sug-
gest. We should note that the test caused a large probability revision: a 20-fold
increase from 1 in 1,000 to 2 in 100. In this sense, the test is quite informative.
Thus, it may be very valuable in guiding subsequent medical treatment.
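For concreteness, the arithmetic behind this figure runs as follows (assuming, as the counts above imply, that the test catches virtually every true cancer case and returns a false positive for roughly 5 percent of healthy individuals):

    Pr(cancer | +) = Pr(+ | cancer)Pr(cancer)/Pr(+)
                   = (1.0)(.001)/[(1.0)(.001) + (.05)(.999)]
                   = .001/.051 = 1/51, or about 2 percent.

The denominator, Pr(+), simply counts all the ways a positive result can arise: one true positive and roughly 50 false positives per 1,000 people tested.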
These examples are representative of a host of examples (used by econo-
mists and psychologists in their research) showing the systematic errors indi-
viduals make in predicting probabilities. A number of important conclusions
emerge from this research. First, individuals are overconfident in their abilities
to make such predictions. Consequently, their prediction mistakes (large and
recurrent as they may be) always come as a surprise. (Even for professional
forecasters, the common saying is “often wrong, never in doubt.”) Second, indi-
viduals make mistakes in combining new and old information. In many cases,
individuals put too much weight on seemingly compelling information (the
psychological sketch or the positive medical test) and too little weight on the
underlying prior probability of the event in question. In other cases, they fail
to appreciate the weight that should be given statistical information (particu-
larly when based on large, random samples).
To sum up, the evidence on individual intuitive prediction delivers a cau-
tionary message. The use of formal analysis guided by Bayes’ theorem offers a
much better guide to probabilistic prediction.
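The theorem in question takes the familiar form

    Pr(A | B) = Pr(B | A)Pr(A)/Pr(B),

where Pr(A) is the prior probability of the event and Pr(A | B) is the revised probability once the evidence B (a positive test, a psychological sketch, and so on) has been observed.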

The Challenger Disaster and NASA's Risk Analysis

On January 28, 1986, the space shuttle Challenger exploded 74 seconds after
takeoff, killing schoolteacher Christa McAuliffe and the six astronauts on
board. The presidential commission that investigated the disaster faulted a
series of decisions by NASA surrounding the flight.^4 Beset by escalating costs
and three previous launch delays, NASA went ahead with the January launch
despite the knowledge of potential risks.
Indeed, the commission’s principal criticism focused on NASA failures to
recognize and accurately assess key launch risks. The explosion was caused by
a blowout of the O-ring seal between two sections of the booster rocket.
Moreover, on the basis of their experience with earlier shuttle flights, NASA
and Morton Thiokol, the maker of the booster rocket, were aware of possible
O-ring problems. O-ring wear had been observed on 7 of the 24 previous shut-
tle launches. Of course, the rockets had always done the job on the previous
launches. So who could argue with a run of 24 successes?
If some O-ring damage was occurring, what was the proximate cause?
NASA scientists had a vague concern about the link between low launch


(^4) This account is drawn from a number of sources: W. Biddle, “What Destroyed the Challenger?” Discover Magazine (April 1986): 40–47; D. L. Chandler, “NASA’s System for Assessing Risks Is Faulted,” Boston Globe, March 5, 1988, p. 26; and J. E. Russo and P. J. Schoemaker, Decision Traps (New York: Simon and Schuster, 1990), pp. 196–198.