risky and theory dependent. The risk is that a major discrepancy is not detected,
leading to a poor description of the underlying agents’ behavior: we addressed
the issue of “ceteris paribus” in section 1.4.1.1, but if economies are inherently
wide-sense non-stationary, then other things will not stay constant. When theories
lack precise formulations of lag lengths, functional dependencies, other potential
determinants, breaks, and non-economic factors of importance, such difficulties
seem all too likely. The problem with theory dependence is that since no economic
analysis has yet proved immutable, the empirical results will be discarded when the
theory is altered, so there is no progressive knowledge accumulation. This is the real
reason that Summers (1991) finds little contribution from empirical econometrics
– it was not really allowed to make one, being restricted to providing empirical
cloth for a pre-designed framework.
Partial use of economic theory. Second, a partial use of economic theory often leads
to pre-specified moment conditions linking variables, $x_t$, and parameters, $\varphi$, usually
being zero for the "true" value of the parameter, $\varphi_0$, in the form (sometimes
without conditioning):
$$
\mathrm{E}\left[ h\left( x_t, \varphi_0 \mid X^{1}_{t-1} \right) \right] = 0 \quad \forall t, \qquad (1.10)
$$
enabling GMM estimation of $\varphi$ (see, e.g., Smith, 2007; also Smith, 1992, who develops
non-nested tests applicable after GMM). Equally often, inference has to be based
on heteroskedasticity- and autocorrelation-consistent covariance (HAC) matrices (see
White, 1980a; Andrews, 1991), which assume that the residuals reflect precisely
those problems in the errors. Unfortunately, residuals can be heteroskedastic and
autocorrelated for many other reasons, including unmodeled breaks, measurement
errors, incorrect dynamics, omitted variables, or an inappropriate functional form
inter alia, most of which would invalidate the estimates derived from (1.10), and
possibly refute the underlying theory. Thus, rigorous testing against that range
of hypotheses would seem necessary, leading to three important difficulties. First,
unless a joint test is used, non-rejection of each may occur when there are several
failures. Second, if any test rejects at the chosen significance level (controlling
for the number of tests undertaken), the validity of all the other tests is cast into
doubt. Third, if rejection does occur, it remains a non sequitur to assume that the
hypothesis which was rejected was the source of the failure, so model revision may
require a theory revamp. Again, there seem to be distinct advantages to beginning
with general formulations that can be simplified when evidence permits, subject
to maintaining identifiability – which can also be a problem with GMM (section
1.4.7 discusses identification).
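To make (1.10) concrete, the following minimal sketch (not from the chapter: all data, names and dimensions are invented) specialises the moment condition to instrument orthogonality, $\mathrm{E}[z_t(y_t - x_t'\varphi_0)] = 0$, estimates $\varphi$ by two-step GMM, and uses a Newey–West (HAC) estimate of the long-run moment covariance both as the weight matrix and for the standard errors:

```python
"""Illustrative two-step GMM with Newey-West (HAC) weighting.

A stylised sketch, not from the chapter: the moment condition is
E[ z_t (y_t - x_t' phi_0) ] = 0 (instrument orthogonality), one special
case of (1.10).  All data, names and dimensions are hypothetical.
"""
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
T, k, m = 500, 2, 3                      # sample size, parameters, instruments

# Hypothetical simulated data: x is endogenous, z are valid instruments.
z = rng.normal(size=(T, m))
u = rng.normal(size=T)
x = z @ rng.normal(size=(m, k)) + 0.5 * u[:, None] + rng.normal(size=(T, k))
phi_true = np.array([1.0, -0.5])
y = x @ phi_true + u

def moments(phi):
    """T x m matrix of moment contributions h(x_t, phi)."""
    e = y - x @ phi
    return z * e[:, None]

def gmm_objective(phi, W):
    gbar = moments(phi).mean(axis=0)
    return T * gbar @ W @ gbar

def newey_west(h, lags):
    """HAC long-run covariance estimate of the (demeaned) moment series."""
    h = h - h.mean(axis=0)
    S = h.T @ h / T
    for j in range(1, lags + 1):
        w = 1.0 - j / (lags + 1)
        G = h[j:].T @ h[:-j] / T
        S += w * (G + G.T)
    return S

# Step 1: identity weighting; step 2: optimal weight = inverse HAC matrix.
phi1 = minimize(gmm_objective, np.zeros(k), args=(np.eye(m),)).x
W = np.linalg.inv(newey_west(moments(phi1), lags=4))
phi2 = minimize(gmm_objective, phi1, args=(W,)).x

# Asymptotic variance (D' S^{-1} D)^{-1} / T with the optimal HAC weight matrix.
D = -(z.T @ x) / T                       # Jacobian of the mean moment conditions
V = np.linalg.inv(D.T @ W @ D) / T
print("estimate:", phi2, " s.e.:", np.sqrt(np.diag(V)))
```

Note that the HAC correction only adjusts the reported uncertainty: if the residual autocorrelation or heteroskedasticity in fact stems from unmodeled breaks, omitted variables or a wrong functional form, the estimates themselves remain invalid, which is precisely the point made above.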
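The multiple-testing difficulty can likewise be illustrated. The sketch below (again hypothetical: the data, the choice of tests and the 5% level are merely for illustration) runs a small battery of standard residual diagnostics on a fitted regression and applies a Bonferroni bound so that the overall size of the battery is controlled; a rejection by any single test still does not reveal which assumption actually failed:

```python
"""Illustrative battery of residual diagnostics with overall size control.

A stylised example (data and model are made up, not from the chapter): each
test checks one way the errors could violate the assumptions behind (1.10),
and the Bonferroni bound keeps the size of the whole battery near the chosen
level -- but a rejection does not identify which assumption failed.
"""
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_breusch_godfrey, het_breuschpagan
from statsmodels.stats.stattools import jarque_bera

rng = np.random.default_rng(1)
T = 300
x = rng.normal(size=(T, 2))
y = 1.0 + x @ np.array([0.8, -0.3]) + rng.normal(size=T)

res = sm.OLS(y, sm.add_constant(x)).fit()

alpha, n_tests = 0.05, 3
threshold = alpha / n_tests            # Bonferroni: overall size <= alpha

p_values = {
    "autocorrelation (Breusch-Godfrey)": acorr_breusch_godfrey(res, nlags=4)[1],
    "heteroskedasticity (Breusch-Pagan)": het_breuschpagan(res.resid, res.model.exog)[1],
    "non-normality (Jarque-Bera)": jarque_bera(res.resid)[1],
}

for name, p in p_values.items():
    verdict = "reject" if p < threshold else "do not reject"
    print(f"{name}: p = {p:.3f} -> {verdict} at overall {alpha:.0%} level")
```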
Economic theory guidelines. Finally, seeking a congruent model of the LDGP based
on economic theory guidelines by embedding the theory-based model in a more
general GUM for the set of candidate variables, with a range of possible spec-
ifications of lags, functional forms, breaks, etc., offers many advantages, not
least avoiding restrictive assumptions dependent on hope rather than evidence.
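As a stylised illustration of embedding the theory variables in a GUM and then simplifying (a deliberately crude sketch, far simpler than operational algorithms such as Autometrics; all variable names, data and thresholds are hypothetical), the theory variables are retained throughout, the least significant remaining candidate is deleted one at a time, and each simplification is accepted only if a residual diagnostic still passes:

```python
"""Toy general-to-specific simplification of a general unrestricted model.

A deliberately simplified sketch (not Autometrics): theory variables are
always retained, the remaining candidates are deleted one at a time by
lowest |t|-value, and each step is kept only if a residual-autocorrelation
diagnostic still passes (a crude stand-in for full congruence checking).
"""
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_breusch_godfrey

rng = np.random.default_rng(2)
T = 400
data = pd.DataFrame(rng.normal(size=(T, 6)),
                    columns=["x1", "x2", "z1", "z2", "z3", "z4"])
data["y"] = 1.0 + 0.7 * data["x1"] - 0.4 * data["x2"] + rng.normal(size=T)

theory_vars = ["x1", "x2"]                 # kept at every stage
candidates = ["z1", "z2", "z3", "z4"]      # GUM extras: lags, dummies, etc.

def fit(vars_):
    X = sm.add_constant(data[vars_])
    return sm.OLS(data["y"], X).fit()

current = theory_vars + candidates
model = fit(current)
while True:
    tvals = model.tvalues.drop(["const"] + theory_vars).abs()
    if tvals.empty or tvals.min() >= 1.96:     # nothing insignificant left
        break
    trial_vars = [v for v in current if v != tvals.idxmin()]
    trial = fit(trial_vars)
    # Accept the simplification only if the residuals still look congruent.
    if acorr_breusch_godfrey(trial, nlags=4)[1] > 0.05:
        current, model = trial_vars, trial
    else:
        break

print("retained variables:", current)
print(model.params.round(3))
```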
Such a general-to-specific (Gets) approach can be demanding, and while it can