John DiNardo 153
likely cause of the K/T mass extinction event was a collision between the Earth and a
large fragment from the Baptistina asteroid shower.”
- In this regard, it is notable that there is a considerable body of non-Bayesian decision
theory, the “Neyman–Pearson” framework being the best known. What is frequently
referred to as the “Neyman–Pearson” statistical framework, however, is rarely explicitly
invoked in most micro-empirical research, even though discussions about the “power”
and “size” of tests are sometimes themselves the subject of debate. See, for example,
McCloskey (1985), McCloskey and Ziliak (1996) and Hoover and Siegler (2008a, 2008b)
for one debate on the subject.
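Since the note invokes the “size” and “power” of tests without defining them, a minimal numerical sketch may help: size is the probability of rejecting a true null, power the probability of rejecting under a given alternative. The setup below (a one-sided z-test of H0: μ = 0 against μ = 0.5, n = 25, σ = 1, 5% level) is an illustrative assumption, not an example from the text.

```python
# Hedged sketch: size and power of a one-sided z-test for a normal mean.
# Assumed setup: H0: mu = 0 vs H1: mu = 0.5, n = 25, sigma = 1, 5% level.
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

n, sigma, crit = 25, 1.0, 1.645          # 1.645 = 5% one-sided critical value
se = sigma / math.sqrt(n)                # standard error of the sample mean

size = 1.0 - norm_cdf(crit)              # rejection probability when mu = 0
power = 1.0 - norm_cdf(crit - 0.5 / se)  # rejection probability when mu = 0.5

print(round(size, 3), round(power, 3))   # roughly 0.05 and 0.80
```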
- There are many subtleties about the distinctions between beliefs and actions that I am
ignoring. For instance, in describing Pascal’s thesis, Joyce (1999, p. 21) – a “Bayesian” – is
“careful to formulate [the thesis] as a norm of rational desire that governs the fair pricing
of risky wagers [those that obey conventional axioms of probability].” In doing so he is
explicit that he is making a statement about desires and not actions (ibid., p. 19). “The
old guard still insists that the concept of a fair price can only be understood in terms of
behavioral dispositions, but it has become clear that the theoretical costs far outweigh
benefits.”
- Ragnar Frisch’s remarks, which accompanied Allais’ article, suggest a fairly heated debate
(emphasis added): “The problem discussed in Professor Allais’ paper is of an extremely
subtle sort and it seems to be difficult to reach a general agreement on the main points
at issue. I had a vivid impression of these difficulties at the Paris colloquium in May,
1952. One evening when a small number of the prominent contributors to this field
of study found themselves gathered around a table under the most pleasant exterior
circumstances, it even proved to be quite a bit of a task to clear up in a satisfactory way
misunderstandings in the course of the conversation. The version of Professor Allais’ paper,
which is now published in ECONOMETRICA emerged after many informal exchanges
of views, including work done by editorial referees. Hardly anything more is now to
be gained by a continuation of such procedures. The paper is therefore now published as
it stands on the author’s responsibility. The editor is convinced that the paper will be a most
valuable means of preventing inbreeding of thoughts in this important field. – R.F.”
- By simple rearranging of terms, B ≻ A yields:
0.11 U(500,000) < 0.10 U(2,500,000) + 0.01 U(0),
and from C ≻ D we get:
0.10 U(2,500,000) + 0.01 U(0) < 0.11 U(500,000).
Hence, a contradiction.
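The contradiction is purely arithmetic: the two revealed preferences impose strict inequalities in opposite directions on the same expression, so no assignment of utilities can satisfy both. A small check (the grid of candidate utility values is an illustrative assumption, not from the text):

```python
# Sketch: no utility values can rationalize both Allais choices at once.
# Let x = U(500,000), y = U(2,500,000), z = U(0).
# B over A requires: 0.11*x < 0.10*y + 0.01*z
# C over D requires: 0.10*y + 0.01*z < 0.11*x
import itertools

def consistent(x, y, z):
    """True only if both revealed preferences hold simultaneously."""
    b_over_a = 0.11 * x < 0.10 * y + 0.01 * z
    c_over_d = 0.10 * y + 0.01 * z < 0.11 * x
    return b_over_a and c_over_d

# Sweep a grid of candidate utilities: no triple satisfies both.
grid = [0.0, 0.25, 0.5, 0.75, 1.0, 2.0, 10.0]
assert not any(consistent(x, y, z)
               for x, y, z in itertools.product(grid, repeat=3))
```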
- For example, although games of chance greatly antedate anything resembling modern
notions of probability – “someone with only modest knowledge of probability math-
ematics could have won himself the whole of Gaul in a week” – anything like our
modern notions of probability did not “emerge permanently in discourse” until 1660 (see
Hacking, 1975, 1990). Perhaps not surprisingly, the history of probability is the sub-
ject of much debate as well. For one criticism of Hacking’s account see Garber and
Zabell (1979).
- Even at this point a fuller treatment would include a discussion of the problem of “logical
omniscience.” All I can do is cite a statement from Savage (1967): “For example, a person
required to risk money on a remote digit of π would have to compute that digit, in order
to comply fully with the theory [of personal probability], though this would really be
wasteful if the cost of computation were more than the prize involved. For the postulates
of the theory imply that you should behave in accordance with the logical implication of
all that you know. Is it possible to improve the theory in this respect, making allowance
within it for the cost of thinking, or would that entail paradox, as I am inclined to believe