Economic analysis (1) is an indispensable guide in the formulation of econometric models. Clear interpretation also helps communication of ideas and results
among researchers, in addition to structuring debate. However, since economic
theories are necessarily simplifying abstractions, translating them directly into
econometric models is bound to lead to problems such as biased coefficient estimates,
wrong signs of coefficients, and/or residual properties that hamper valid inference.
The main distinction seems to be between seeing theory as representing the correct
specification (leaving parameter estimation to the econometrician) and viewing
theory as a guideline in the specification of a model which also accommodates
institutional features, attempts to accommodate heterogeneity among agents,
addresses the temporal aspects of the dataset, etc. (see Granger, 1999).
Arguments against “largely empirical models” include sample dependency, lack
of invariance, unnecessary complexity (in order to fit the data) and chance findings
of “significant” variables. Yet the ability to characterize the data (2) remains an
essential quality of useful econometric models and, given the absence of theoretical
truisms, the implications of economic theory have to be confronted with the data
in a systematic way.
We use cointegration methods on linearized and discretized dynamic systems
to estimate theory-interpretable and identified steady-state relationships, imposed
in the form of equilibrium correction models. We also make use of an automated
model selection approach to sift out the best theory-interpretable and identified
dynamic specifications. Hoover and Perez (1999), Hendry and Krolzig (1999) and
Doornik (2008) have shown that automated model selection methods have a good
chance of finding a close approximation to the data-generating process (DGP), and
that the danger of overfitting is, in fact, (surprisingly) low. Conversely, acting as
if the specification is given by theory alone, with only coefficient estimates left
to “fill in,” is bound to result in the econometric problems noted above, and in a
model of lower relevance for the economy it claims to represent.
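To make the two steps concrete, the following sketch (not the authors' actual procedure) indicates how the steady-state and selection stages might be implemented in Python with statsmodels: a Johansen cointegration analysis with a VECM for the equilibrium-correction form, followed by a deliberately simplified general-to-specific search standing in for Autometrics-style selection. The data frame `macro` and its column names are hypothetical.

```python
# A minimal illustrative sketch, assuming a pandas DataFrame `macro` of
# quarterly, log-transformed macro series with made-up column names.
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.vector_ar.vecm import VECM, coint_johansen

# Step 1: cointegration analysis of the levels and an equilibrium-correction
# (VECM) representation of the chosen steady-state relations.
levels = macro[["log_wage", "log_price", "log_productivity"]]  # hypothetical names
joh = coint_johansen(levels, det_order=0, k_ar_diff=2)
print("Trace statistics:", joh.lr1)        # compare with joh.cvt critical values

vecm = VECM(levels, k_ar_diff=2, coint_rank=1, deterministic="ci").fit()
print(vecm.beta)    # candidate steady-state (cointegrating) relationship
print(vecm.alpha)   # equilibrium-correction adjustment coefficients

# Step 2: a deliberately crude general-to-specific search, standing in for
# Autometrics/PcGets: drop the least significant regressor until every
# retained variable has |t| above the chosen threshold.
def gets_select(y, X, t_crit=2.0):
    X = sm.add_constant(X)
    while True:
        res = sm.OLS(y, X).fit()
        tvals = res.tvalues.drop("const").abs()
        if len(tvals) == 0 or tvals.min() >= t_crit:
            return res
        X = X.drop(columns=[tvals.idxmin()])
```

A full implementation of the selection stage would, of course, also involve diagnostic testing and multi-path search, as in the automated procedures cited above; the single-path elimination here only illustrates the idea.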
In order to develop a scientific basis for policy modeling in macroeconometrics,
a new model’s capability of encompassing earlier findings should be regarded as
an important aspect of structure (3). There are many reasons for the coexistence
of contested models for the same phenomena, some of which may be viewed as
inherent (limited number of data observations, measurement problems, contro-
versy about operational definitions, new theories). Nevertheless, the continued
use of corroborative evaluation (i.e., only addressing goodness-of-fit or predicting
the stylized fact correctly) may inadvertently hinder the accumulation of evidence.
One suspects that there would be huge gains from a breakthrough in new standards
of methodology and practice for the profession.
Ideally, empirical modeling is a cumulative process in which models are continually
overtaken by new and more useful ones. By “useful,” we mean
models that are relatively invariant to changes elsewhere in the economy, i.e., they
contain autonomous parameters (see Haavelmo, 1944; Johansen, 1977; Aldrich,
1989; Hendry, 1995b). Models with a high degree of autonomy represent struc-
tural properties: they remain invariant to changes in economic policies and other