
and applied econometrics. No longer is it the case that the mainstay of empirical analysis is a set of macroeconomic time series, often quite limited in sample period. Earlier chapters in this part of the volume emphasize that the data sources now available are much richer than this, both in variety and in length of sample period. As Jacho-Chávez and Trivedi note, the electronic recording and collection of data has led to substantial growth in the availability of census and survey data. However, the nature of these data raises problems that require theoretical solutions: for example, sample selection, measurement error and missing or incomplete data. On the computing side, the scale of the datasets, and of estimation based upon them, demands reliable high-dimensional optimization routines and the ability to handle large-scale Monte Carlo simulations. The increase in computing power has meant that techniques that were not previously feasible, such as simulation-assisted estimation and resampling, are now practical and in widespread use. Moreover, nonparametric and semiparametric methods, which involve estimating entire distributions rather than the small set of parameters of a regression model, have been developed by drawing on this improved computing power. Throughout the chapter, Jacho-Chávez and Trivedi motivate their discussion with examples of practical interest, including the modeling of hedonic prices of housing attributes, female labor force participation, Medicare expenditure and the number of doctor visits. Interestingly, they conclude that important problems remain to be solved, particularly those related to assessing public policy, such as identification and implementation in the context of structural, dynamic and high-dimensional models.
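To make "resampling" concrete, the following is a minimal sketch of a nonparametric bootstrap for the standard error of a sample mean. It is purely illustrative (written in Python for this introduction; it is not code drawn from the chapter):

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative data: a skewed sample whose mean we want to assess.
    sample = rng.lognormal(mean=0.0, sigma=1.0, size=200)

    def bootstrap_se(data, stat=np.mean, n_boot=2000):
        """Bootstrap standard error of `stat`: resample the data with
        replacement many times and measure the spread of the statistic
        across the resamples."""
        n = len(data)
        reps = np.empty(n_boot)
        for b in range(n_boot):
            idx = rng.integers(0, n, size=n)  # n draws with replacement
            reps[b] = stat(data[idx])
        return reps.std(ddof=1)

    print(f"sample mean:  {sample.mean():.3f}")
    print(f"bootstrap SE: {bootstrap_se(sample):.3f}")

The same resample-and-recompute loop extends to far more elaborate statistics than the mean, which is why the growth in computing power described above has made such methods routine.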
In Part VI, the theme of the importance of economic policy continues, but with the emphasis now on monetary and macroeconomic policy. Starting in the 1970s and continuing into the 1990s, the development of macroeconometric models for policy purposes was a highly regarded activity; during that period computing power was developing primarily through mainframe computers, allowing not so much the estimation as the simulation of macroeconomic models of a dimension that had not previously been contemplated. Government treasuries, central banks and some non-governmental agencies developed their own empirical macro-models comprising hundreds of equations. Yet these models failed to live up to their promise, either wholly or in part. For some periods there was an empirical failure, the models simply not being good enough; but, more radically, the theoretical basis of the models was often quite weak, at least relative to the theory of the optimizing, rational agent and to ideas of intertemporal general equilibrium.
In Chapter 16, Carlo Favero expands upon this theme, especially as it relates to the econometrics of monetary policy and the force of the critiques by Lucas (1976) and Sims (1980). A key distinction in the dissection of the modeling corpse is between structural identification and statistical identification. The former concerns the relationship between the structural parameters and the statistical parameters of the reduced form, while the latter concerns the properties of the statistical, or empirical, model that represents the data. Typically, structural identification is achieved by imposing a priori restrictions on the structural parameters.
