Most of the oranges in the box are good.
Conclusion: The orange I randomly select will be good. (**)

This argument (**) is risky. Even if the premise is true, the conclusion may be
wrong; you may be unlucky and draw one of the few bad oranges.
While probability seems of little value for non-risky arguments such as (*), even
a non-Bayesian can easily see how probability might be helpful for arguments such
as (**). For example, if we know 90% of the oranges in the box are good, the
conclusion “There is a 90% chance that the orange I select will be good” seems
less risky than the conclusion “There is a 90% chance that the orange will be bad.”
Probability and statistics for the Bayesian can be viewed as a way to tame risky
arguments and make them amenable to the types of reasoning more commonly
found in situations requiring merely deductive logic.
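
To see in symbols how probability tames the risky argument (**), here is a minimal
formalization of the orange example; the notation (a box of N oranges, G of them
good, one drawn uniformly at random) is mine, not the chapter's:

\[
\Pr(\text{selected orange is good}) \;=\; \frac{G}{N} \;=\; 0.9 .
\]

The categorical conclusion "the orange I select will be good" is wrong with
probability 1 - G/N = 0.1, a risk that the probabilistic conclusion "there is a 90%
chance the orange is good" states explicitly rather than hides.
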
As I discuss in section 3.4.2, a Bayesian is typically more comfortable thinking
about the probability of most propositions – which can be true, false, or uncer-
tain – than a non-Bayesian. The non-Bayesian is most comfortable thinking about
probability as the relative frequency of events. In the above example, neither the
Bayesian nor the non-Bayesian is that uncomfortable talking about the event of a
randomly chosen orange being good or bad. On the other hand, a non-Bayesian is
more likely to feel unclear about a statement like “There is a 90% chance that an
asteroid shower is the source of the Chicxulub impactor that produced the Creta-
ceous/Tertiary (K/T) mass extinction of the dinosaurs 65 million years ago.”^16 The
proposition that “The mass extinction of the dinosaurs was caused by a piece of an
asteroid” is either true or false.^17 It is not a statement about relative frequency, or
the fraction of times that the proposition is true in different “worlds.”
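
For readers who want the contrast in symbols, a schematic gloss (standard textbook
shorthand, not the chapter's notation) is that the frequentist attaches probability to
a repeatable event, the Bayesian to a fixed proposition:

\[
\Pr(A) \;=\; \lim_{n \to \infty}
\frac{\#\{\text{occurrences of the event } A \text{ in } n \text{ trials}\}}{n},
\qquad
\Pr(H \mid \text{evidence}) \in [0,1],
\]

where A is something repeatable (a randomly drawn orange being good) and H is a
single proposition (the Chicxulub impactor came from an asteroid) that is simply
true or false.
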
The divergence between the two points of view becomes clearest when we begin
discussing propositions much more generally. If probability is understood as being
useful in induction – one version of the argument goes – it is a small step from this
example to considering probability as usefulwheneverone is faced with making a
risky decision. By these sorts of notions, most decisions in life become subject to
the probability calculus because most propositions that are risky can and should be
reasoned about using probability.
Indeed, once you’ve moved from reasoning about beliefs to reasoning about deci-
sions, notions of “utility” can often become important. Many (including some
Bayesians) have difficulty with this step: the relationship between “beliefs” and
“actions” is not always obvious. I, for example, tend to think of them as rather
distinct.^18 I think of Voltaire’s quip – “I am very fond of truth, but not at all of
martyrdom” – as a (perhaps extreme) example of the possible divergence between
beliefs and actions. Hacking (1965, p. 16) observes that:


beliefs do not have consequences in the same way in which actions do...[For
example] we say that a man did something in consequence of his having certain
beliefs, or because he believed he was alone. But I think there is pretty plainly
a crucial difference between the way in which his opening the safe is a con-
sequence of his believing he was unobserved, and the way in which the safe’s