Palgrave Handbook of Econometrics: Applied Econometrics


Methodology of Empirical Econometric Modeling


economics and its applications in particular cases. Finally, some courses require
students to undertake empirical work themselves, often replicating or evaluating
existing studies rather than pursuing novel research. Combinations of some or all of
these approaches also occur.
If the objective is for students completing the course to be able to tackle a new
application reliably, then teaching applied econometrics becomes very demanding.
A wide range of skills and insights need to be conveyed, many of which concern
“auxiliary” issues such as data availability, its quality and its correspondence to
the target of the analysis, including frequency, seasonality, transformations, etc.;
institutions and policy agencies that impinge on the economic process; important
historical and non-economic contingencies; the specification of the candidate
variable list, its dynamics, exogeneity, functional forms and the constancy
of possible models; and the use of software. When the first attempt fails on the
desired criteria, a revision process is needed, so difficulties of expanding searches
and sequentially correcting problems with a model must be confronted, all too
often leaving the student floundering.
A key job of an applied econometrician is to formulate the general model that
underpins the analysis, which includes the specification of all candidate variables
that might influence the “target” variables of interest in the modeling problem,
their general functional forms (e.g., logs), and putative exogeneity assumptions.
General economic reasoning plays a substantive part at this stage. Further, one
must collect and carefully check all the data series to be modeled, and investigate
their historical context. Finally, the easier part is using appropriate software to
congruently model the relevant series. Yet, many studies by “experts” remain clever
“detective exercises” in which a feel for the evidence helped point towards a viable
conclusion. The approach in Hendry and Nielsen (2007a), summarized in Hendry
and Nielsen (2007b), is to first prepare students to understand the elements of
likelihood theory, using likelihood ratio tests for inference and evaluation – testing
the assumptions for the validity of those inferences – leading to model selection
in the econometric theory part of the course. A sequence of increasingly realistic
theoretical models is developed from i.i.d. binary data through to cointegrated
equations with structural breaks. On the applied part of the course, we thoroughly
examine an agreed data set, and after teaching the relevant software, students can
rapidly move from simple models to general ones using automatic methods. Our
focus is on developing well-specified empirical models of interesting economic
issues. Given a new problem, students then have a structured approach to follow
in their investigations. We consider there has been a marked improvement in their
resulting empirical studies.
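To make the course's progression concrete, the following is a minimal sketch (not taken from Hendry and Nielsen) of the kind of likelihood ratio test the econometric theory part begins with, applied to the i.i.d. binary data mentioned above: testing whether two binary samples share a common success probability. The function names and the illustrative counts are invented for this example.

```python
import math

def bernoulli_loglik(successes, n, p):
    """Log-likelihood of `successes` in `n` i.i.d. Bernoulli(p) trials."""
    if p in (0.0, 1.0):  # degenerate MLEs at the boundary
        return 0.0 if successes in (0, n) else float("-inf")
    return successes * math.log(p) + (n - successes) * math.log(1.0 - p)

def lr_test_two_samples(s1, n1, s2, n2):
    """LR test of H0: p1 == p2 against H1: p1 != p2 for two binary samples."""
    # Restricted model: a single pooled success probability
    p_pooled = (s1 + s2) / (n1 + n2)
    ll_restricted = bernoulli_loglik(s1 + s2, n1 + n2, p_pooled)
    # Unrestricted model: separate MLEs for each sample
    ll_unrestricted = (bernoulli_loglik(s1, n1, s1 / n1)
                       + bernoulli_loglik(s2, n2, s2 / n2))
    # Guard against tiny negative values from floating-point rounding
    lr = max(0.0, 2.0 * (ll_unrestricted - ll_restricted))
    # One restriction, so LR ~ chi-squared(1) under H0:
    # P(chi2_1 > lr) = erfc(sqrt(lr / 2))
    p_value = math.erfc(math.sqrt(lr / 2.0))
    return lr, p_value

# Illustrative data: 30/100 successes versus 45/100 successes
lr, p = lr_test_two_samples(30, 100, 45, 100)
```

The same logic — maximize the likelihood under the restricted and unrestricted models, then compare twice the log-likelihood difference to a chi-squared critical value — carries through the course's sequence of increasingly realistic models, up to cointegrated equations with structural breaks.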
An example of my own approach is recorded in Hendry (1999): there may be
some “tacit knowledge” therein (and I hope there is value added), but most of
the above steps can be formalized without the need for an extensive
apprenticeship. The next section focuses on comparing how well automatic model
selection does without any “prior” historical knowledge or empirical modeling
experience. The results reported in section 1.7 took about 20 minutes of real time,
including the write-up: even granted that the data were pre-prepared and the log
