
“If a man will begin with certainties, he shall end in doubts; but if he will be
content to begin with doubts he shall end in certainties.” (Francis Bacon)
Conversely, the list at the beginning of section 1.4 makes it clear that “model
uncertainty” comprises much more than whether one selected the “correct model”
from some set of candidate variables that nested the LDGP. If, say, 1,000 possibly
lagged, nonlinear functions of a set of candidate exogenous variables in a model
with many breaks are checked for relevance at a significance level of 0.1%, and
all are indeed irrelevant, then on average one will be retained adventitiously, so
uncertainty is greatly reduced by eliminating about 999 potential influences. The
entire point of model selection is to reduce some of the uncertainties about the
many aspects involved in model specification, and the cost of doing so is a “local
increase” in uncertainty as to precisely which influences should be included and
which excluded around the margin of significance. Thus, embedding the claimed
theory in a more general specification that is congruent with all the available evi-
dence offers a chance to both utilize the best available theory insights and learn
from the empirical evidence. Since such embedding can increase the initial model
size to a scale where a human has intellectual difficulty handling the required
reductions, we next consider computerized, or automatic, methods for model
selection.
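
To make that arithmetic concrete, the following is a minimal Monte Carlo sketch (not from the chapter): it tests 1,000 genuinely irrelevant candidate regressors at the 0.1% level and counts spurious retentions. The sample size, the one-regressor-at-a-time t-tests, and the i.i.d. normal data are illustrative assumptions, not the selection algorithms discussed below.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
T, N, alpha = 500, 1000, 0.001            # observations, candidates, level
crit = stats.t.ppf(1 - alpha / 2, T - 1)  # two-sided critical value

retained = []
for rep in range(100):                    # 100 Monte Carlo replications
    y = rng.standard_normal(T)            # y is unrelated to every candidate
    X = rng.standard_normal((T, N))
    ssx = (X ** 2).sum(axis=0)
    b = X.T @ y / ssx                     # univariate OLS slopes, no intercept
    resid = y[:, None] - X * b            # residuals, one column per candidate
    s2 = (resid ** 2).sum(axis=0) / (T - 1)
    t = b / np.sqrt(s2 / ssx)             # t-statistic for each candidate
    retained.append(int((np.abs(t) > crit).sum()))

print(np.mean(retained))                  # close to N * alpha = 1
```

Across replications, the count of adventitiously retained regressors averages close to N × α = 1,000 × 0.001 = 1, the “local increase” in uncertainty described above, while the roughly 999 eliminations deliver the large reduction in uncertainty.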


1.5.1 Automatic model selection


“Does – the one – that wins – get the crown?” she asked, as well as she
could, for the long run was putting her quite out of breath.
“Dear me, no!” said the King. “What an idea!” (Alice to the White King
in Lewis Carroll, 1899)

The many alternatives now available include, but are not restricted to, Phillips
(1994, 1995, 1996), Tibshirani (1996), Hoover and Perez (1999, 2004), Hendry
and Krolzig (1999, 2001), White (2000), Krolzig (2003), Kurcewicz and Mycielski
(2003), Demiralp and Hoover (2003), and Perez-Amaral et al. (2003); also see the
special issue on model selection edited by Haldrup, van Dijk and Hendry (2003)
(the references cited therein provide bibliographic perspective on this huge litera-
ture). Complaints about model selection have a long pedigree, from Keynes (1939)
about “data-based modeling” and Koopmans (1947) on “measurement without
theory,” through “pre-test biases” from test-based selection in Judge and Bock
(1978); “repeated testing” inducing adventitious significance in Leamer (1978,
1983) and Lovell (1983) criticizing selection rules seeking “significance,” to Pagan
(1987) on the potential “path dependence of any selection”; Hendry, Leamer and
Poirier (1990) debating “arbitrary significance levels”; Chatfield (1995) criticizing
“ignoring selection effects” as misrepresenting uncertainty, and Faust and
Whiteman (1997) on “lack of identification,” but most have now been rebutted (see,
e.g., Hendry, 2000a). Concerning Keynes’ comment quoted above, not only should
everyone get the same answer from an automatic algorithm applied to the same
GUM using the same selection criteria, but investigators with different GUMs,
differing only by irrelevant variables, could also end up with the same model.
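
That reproducibility claim is easy to illustrate. The sketch below uses simple backward elimination from a GUM (general unrestricted model) as an illustrative stand-in for the far more sophisticated multi-path algorithms cited above; the data-generating process, variable names, and significance level are all assumptions for the illustration. Two investigators whose GUMs differ only by irrelevant variables typically terminate at the same model.

```python
import numpy as np
from scipy import stats

def backward_select(y, X, names, alpha=0.01):
    """Drop the least significant regressor until all retained |t| pass."""
    keep = list(range(X.shape[1]))
    while keep:
        Z = X[:, keep]
        b, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ b
        s2 = resid @ resid / (len(y) - len(keep))
        se = np.sqrt(s2 * np.diag(np.linalg.inv(Z.T @ Z)))
        t = b / se
        crit = stats.t.ppf(1 - alpha / 2, len(y) - len(keep))
        worst = int(np.argmin(np.abs(t)))
        if abs(t[worst]) >= crit:          # every retained term is significant
            return {names[k] for k in keep}
        keep.pop(worst)                    # eliminate and re-estimate
    return set()

rng = np.random.default_rng(1)
T = 200
relevant = rng.standard_normal((T, 2))     # x1, x2 enter the DGP
noise = rng.standard_normal((T, 5))        # z1..z5 are irrelevant
y = relevant @ np.array([1.0, -0.5]) + 0.5 * rng.standard_normal(T)

gum1 = np.hstack([relevant, noise[:, :3]]) # GUM with 3 irrelevant candidates
gum2 = np.hstack([relevant, noise])        # larger GUM with 5 irrelevant ones

print(backward_select(y, gum1, ["x1", "x2", "z1", "z2", "z3"]))
print(backward_select(y, gum2, ["x1", "x2", "z1", "z2", "z3", "z4", "z5"]))
# Both searches typically return {'x1', 'x2'}: the extra irrelevant
# variables in the larger GUM are eliminated along the way.
```

Backward elimination is only a single-path search; the algorithms cited above explore multiple reduction paths and apply diagnostic tests, but the invariance of the terminal model to adding irrelevant variables to the GUM is the same in spirit.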
