Palgrave Handbook of Econometrics: Applied Econometrics

David T. Jacho-Chávez and Pravin K. Trivedi

brings out heterogeneity of individuals, firms, and organizations. Modeling such
heterogeneity is often essential for making valid inferences. While aggregation
usually reduces noise and leads to smoothing, disaggregation leads to loss of
continuity and smoothness, and increases the range of variation in the data. For
example, household average weekly consumption of (say) meat is likely to vary
smoothly, while that of an individual household in a given week may frequently
be zero, and may also switch to positive values from time to time. As Pudney
(1989) has pointed out, micro-data exhibit “holes, kinks and corners.” Discreteness and nonlinearity of response are intrinsic to microeconometrics, and
they contribute to an increase in computational complexity relative to linear
models.
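The contrast between smooth aggregates and “holey” individual records can be illustrated with a small simulation. The setup below is purely hypothetical (the purchase probability and lognormal amounts are assumed values, not estimates): each household buys meat in a given week with some probability, so any one household's series contains many exact zeros, while the cross-household weekly average varies smoothly.

```python
import numpy as np

# Hypothetical illustration of "holes, kinks and corners":
# individual weekly purchases are frequently exactly zero,
# while the average over many households is smooth.
rng = np.random.default_rng(0)
n_households, n_weeks = 1000, 52

# Assumed data-generating process: a household buys in a given week
# with probability 0.4; conditional on buying, the amount is lognormal.
buys = rng.random((n_households, n_weeks)) < 0.4
amounts = rng.lognormal(mean=1.0, sigma=0.5, size=(n_households, n_weeks))
purchases = buys * amounts

one_household = purchases[0]        # many exact zeros ("holes")
weekly_average = purchases.mean(axis=0)  # smooth across weeks

print("share of zero weeks for one household:", (one_household == 0).mean())
print("range for one household:", one_household.min(), one_household.max())
print("range of the weekly average:", weekly_average.min(), weekly_average.max())
```

Disaggregation shows up exactly as the text describes: the individual series mixes a mass point at zero with a wide positive range, whereas the aggregate series is bounded away from zero and varies over a much narrower interval.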


  • Model complexity. Empirical microeconometrics is closely tied to issues of
    public policy. Attempts to strengthen the policy relevance of models typically
    increase the complexity of the models. For example, one may be interested not
    only in the average impact of a policy change, but also in its distribution. The
    greater the heterogeneity in response, the greater the relevance of the latter. One
    feature of such complexity is that models are potentially of high dimension. A
    multiple equation nonlinear regression model with scores of variables is not at
    all unusual.

  • Restrictions. Strong functional form and distributional restrictions are not
    favored. Instead, non- and semiparametric models, or flexible parametric
    models, are preferred.

  • Robustness. Models whose conclusions are not very sensitive to modeling assumptions are preferred. Robustness comparisons between competing
    specifications are an important part of the modeling cycle.

  • Testing and evaluation. Model testing and evaluation, generally requiring
    additional computational effort, are integral parts of the search for robust
    specifications.

  • Asymptotic approximations. Traditionally, inference about economic rela-
    tions relied heavily on large sample approximations, but with additional computational effort it is often possible to improve the quality of such approximations.
    Increasingly, applied econometricians engage in such improvements.

  • Computing advances. Breathtaking advances in the speed and scope of
    computing encourage applied econometricians to experiment with computa-
    tionally more ambitious projects. Avoiding a technique purely because it is
    computationally challenging is no longer regarded as a serious argument.
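The point about trading computation for better inference can be made concrete with the bootstrap, one standard computationally intensive alternative to purely asymptotic standard errors. The sketch below uses an assumed data-generating process (a linear model with heavy-tailed errors; all parameter values are illustrative) and resamples (x, y) pairs to obtain a bootstrap standard error for the OLS slope.

```python
import numpy as np

# A minimal pairs-bootstrap sketch under an assumed DGP:
# y = 1 + 0.5 x + e, with heavy-tailed t(4) errors.
rng = np.random.default_rng(1)
n, B = 200, 999

x = rng.normal(size=n)
y = 1.0 + 0.5 * x + rng.standard_t(df=4, size=n)

def ols_slope(x, y):
    """Slope coefficient from a bivariate OLS regression."""
    xc = x - x.mean()
    return xc @ (y - y.mean()) / (xc @ xc)

slope = ols_slope(x, y)

# Resample (x_i, y_i) pairs with replacement B times and
# re-estimate the slope on each bootstrap sample.
idx = rng.integers(0, n, size=(B, n))
boot_slopes = np.array([ols_slope(x[i], y[i]) for i in idx])
boot_se = boot_slopes.std(ddof=1)

print("OLS slope:", round(slope, 3), " bootstrap s.e.:", round(boot_se, 3))
```

The extra cost is B re-estimations instead of one, which is exactly the kind of computational burden that modern hardware has made routine.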


In this chapter we selectively and illustratively survey how computational advances
have influenced, and in turn are influenced by, the style and direction of modern
applied microeconometrics. Although it would be fascinating to do so, we do not systematically take a historical perspective on econometric computation. The interested
reader is directed to Berndt (1991, pp. 1–10) and the articles by Renfro (2004b) and
Slater (2004). To limit the potentially vast scope of this article, we restrict the coverage to a few selected topics whose discussion is illustrated with specific data-based
examples. We do not cover the important developments in Bayesian computation