and Luttmer, 1997). If the methodology works as intended, then when perform-
ing the same analysis on data from cities without rent control – they consider
Chicago, Illinois, and Hartford, Connecticut – they should consistently estimate
no welfare loss due to rent control. Contrary to such a presumption, however,
for both cities they estimate large amounts of apartment misallocation (although
these estimates are smaller than their estimate for New York City) that are statisti-
cally quite precise. Indeed, they acknowledge that “strictly interpreted, the results
reject the identifying assumptions. In both cities, the procedure finds statistically
significant misallocation” (Glaeser and Luttmer, 2003, p. 1044). Nonetheless, while
there are differences between the two versions of the paper, it is not clear whether
such a rejection played any role in changing the inferences they draw. Indeed, they
argue: “While this is disturbing, the large difference between our New York results
and the results for these placebo cities suggests that even though our identifying
assumptions may not exactly be true, the failure of the assumptions is unlikely to
fully account for the observed misallocation in New York” (ibid., p. 1044). What
does it mean to say a set of assumptions fails to “fully account for the observed
misallocation” and why should the results be viewed as “disturbing” (if indeed
they should be)? In such a case, I think it is fair to say that we should have little
confidence that their proposed methodology has a “truth preserving virtue.” Note,
however, this is only weakly related to one’s views about the merits of rent control
in New York City.
Much of the variation among non-Bayesians in their reaction to such statistical
information seems to involve the “primacy” of certain types of (economic) models.
Very roughly speaking, one can point to “a design-based approach” which focuses
on creating or finding situations which resemble “chance set-ups” and where an
analysis of the DGP proceeds separately from a single, specific, highly articulated,
theoretical economic model. Historically, this approach has been associated with
an emphasis on such issues as pre-specified analysis, “serious” specification testing,
replicability, avoiding “confounding,” identification, and so on.^67
By contrast, one can also identify at least one strand of so-called “structural
approaches” where there is little or no distinction between a DGP and a highly artic-
ulated “theoretical economic model.”^68 An archetypal example of this approach,
perhaps, is the multinomial logit of McFadden (1974) in which the consumer
choice model – utility function, specification of heterogeneity in tastes, and so
on – delivers a complete DGP in the form of a likelihood function. A feature of
such an approach is that, in principle, once the model has been estimated, one
can study “counterfactual policy simulations” or “experiments” which may have
never been performed but can be described within the model.
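As a minimal sketch of the idea (using generic notation rather than McFadden's original), suppose individual i derives utility U_ij = x_ij'β + ε_ij from alternative j, where the ε_ij are assumed i.i.d. type-I extreme value. The model then delivers choice probabilities and a sample likelihood of the form

\[
P(y_i = j \mid x_i; \beta) = \frac{\exp(x_{ij}'\beta)}{\sum_{k}\exp(x_{ik}'\beta)},
\qquad
L(\beta) = \prod_{i}\prod_{j} P(y_i = j \mid x_i; \beta)^{\mathbf{1}[y_i = j]} .
\]

A counterfactual “experiment” then amounts to re-evaluating these probabilities at policy-altered attributes, holding the estimated β fixed.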
This line of research gave birth to further developments which have yielded a
wide variety of attitudes toward what might be called “severe testing.” At one
extreme, some researchers, such as Edward Prescott, apparently “completely reject
econometrics as a useful scientific tool. Instead [Prescott] promotes calibration as the
preferred method for ‘uncovering’ the unknown parameters of structural models
and for evaluating and comparing their ability to fit the data” (Rust, 2007, p. 4).
