  • Data partition determines which variables to include and which to omit in the
    model specification, a decision that depends on the purpose of the modeling
    exercise but is fundamental to the success of the empirical model.

  • Marginalizing with respect to $v_t$ is without loss if $X_T^1$ is sufficient for
    $\theta$; and marginalizing with respect to $V_{t-1}^1$ is without loss if it is
    Granger non-causal for $x_t$ and the conditional-marginal parameters satisfy a cut.

  • Sequential factorization induces no loss as $\epsilon_t$ from (1.5) is an innovation
    relative to $R_{t-1}^1$.

  • Parameter constancy over time is fundamental to most uses of a model, and
    invariance (constancy across interventions to the marginal process) is essential
    for policy.


  • Lag truncation leads to no loss if $\epsilon_t$ remains an innovation against
    $X_{t-1}^1$ (see the first sketch after this list).



  • Integrated data can be reduced to I(0) by cointegration and differencing, sustaining
    a more parsimonious representation, and supporting conventional inference (see the
    second sketch after this list).

  • Functional form specification may or may not entail a reduction, and does not
    when the two densities are equivalent (e.g., logs of log-normal variables are
    normal; see the third sketch after this list).

  • Conditional factorizations entail no loss of information when $z_t$ is weakly
    exogenous for $\theta$, addressed in section 1.4.5.

  • Simultaneity can allow one to parsimoniously capture joint dependence.
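
Three short simulation sketches follow. All assume Python with numpy, scipy, and statsmodels, which are illustrative choices on our part, not tools used in the chapter; the simulated processes and parameter values are likewise invented for the examples. First, the innovation requirement behind lag truncation: residuals from an under-truncated autoregression fail a Ljung-Box test for remaining autocorrelation, while an adequate lag length leaves (near-)innovation residuals.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(0)
T = 500

# Simulate an AR(2): y_t = 0.5 y_{t-1} + 0.3 y_{t-2} + e_t
e = rng.normal(size=T)
y = np.zeros(T)
for t in range(2, T):
    y[t] = 0.5 * y[t - 1] + 0.3 * y[t - 2] + e[t]

# Truncating at one lag leaves autocorrelated residuals: not an innovation
r1 = AutoReg(y, lags=1).fit().resid
# Two lags suffice: residuals behave as an innovation process
r2 = AutoReg(y, lags=2).fit().resid

print(acorr_ljungbox(r1, lags=[10], return_df=True))  # small p-value: reject
print(acorr_ljungbox(r2, lags=[10], return_df=True))  # large p-value: no loss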
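```

Second, the integrated-data reduction, under the same assumed stack (the cointegrating vector (1, -0.5) is known by construction here): two I(1) series sharing a stochastic trend are non-stationary in levels, but the cointegrating combination and the first difference are both I(0) by an augmented Dickey-Fuller test.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(1)
T = 500

# A common stochastic trend makes x_t and y_t individually I(1)
trend = np.cumsum(rng.normal(size=T))
x = trend + rng.normal(size=T)
y = 0.5 * trend + rng.normal(size=T)

# adfuller returns (statistic, p-value, ...); index 1 is the p-value
print("y levels :", adfuller(y)[1])            # high p: unit root not rejected
print("y - 0.5x :", adfuller(y - 0.5 * x)[1])  # low p: cointegrating combination is I(0)
print("diff(y)  :", adfuller(np.diff(y))[1])   # low p: differencing also yields I(0)
```

Both routes deliver I(0) transformations on which conventional inference applies, which is the point of the bullet above.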
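
Third, the log-normal example from the functional-form bullet can be verified directly: logging a log-normal draw is a one-to-one transformation to an exactly normal variable, so the respecification entails no reduction.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
z = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)

# Jarque-Bera rejects normality for the raw log-normal draws ...
print("z      :", stats.jarque_bera(z).pvalue)          # ~0: reject normality
# ... but not for their logs, which are N(0, 1) by construction
print("log(z) :", stats.jarque_bera(np.log(z)).pvalue)
```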


Testing reductions. Second, reductions are testable against any preceding, less
reduced, distributions. Indeed, there is an accompanying taxonomy of evaluation
information that seeks to ascertain the statistical significance of the losses imposed
by the various reductions. This leads to six major null hypotheses about the final
model's specification: homoskedastic innovations $\{\epsilon_t\}$; $z_t$ weakly
exogenous for $\theta$; constant, invariant $\theta$; data-admissible formulations
on accurate observations; theory-consistent, identifiable structures; encompassing
rival models. While this
exhausts the nulls to test, there are many alternatives to each. Models which satisfy
the first and third are well specified on the available information, and if they
satisfy the first three they are said to be (empirically) congruent. One model
(parsimoniously) variance dominates another if it has a smaller unexplained variance
(and no more parameters): the notion of one model explaining the results of other
models extends variance dominance to account for all other parameters. The principle
of encompassing was formalized in Hendry and Richard (1982), and the theory of
testing developed by Mizon (1984) and Mizon and Richard (1986) (see Hendry and
Richard, 1989, and Hendry, Marcellino, and Mizon, 2008, for surveys). An admissible,
theory-consistent, encompassing, congruent model satisfies all six criteria.
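
As a stylized illustration of variance dominance and a simple encompassing comparison (a hedged sketch with an invented data-generating process, not the Mizon-Richard testing apparatus itself), one can fit two non-nested rivals and then test each against the model that nests both:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 200
x1, x2 = rng.normal(size=(2, n))
y = 1.0 + 0.8 * x1 + rng.normal(size=n)  # artificial DGP: only x1 matters

m1 = sm.OLS(y, sm.add_constant(x1)).fit()  # rival model M1
m2 = sm.OLS(y, sm.add_constant(x2)).fit()  # rival model M2

# Variance dominance: same parameter count, so compare unexplained variances
print("sigma^2 M1:", m1.mse_resid, "  sigma^2 M2:", m2.mse_resid)

# Encompassing via the minimal nesting model [const, x1, x2]:
joint = sm.OLS(y, sm.add_constant(np.column_stack([x1, x2]))).fit()
# M1 parsimoniously encompasses the nesting model if x2 adds nothing ...
print(joint.f_test(np.array([[0.0, 0.0, 1.0]])))  # H0: coefficient on x2 = 0
# ... while M2 fails to encompass it, since x1 is clearly needed
print(joint.f_test(np.array([[0.0, 1.0, 0.0]])))  # H0: coefficient on x1 = 0
```

Testing each rival against their minimal nesting model mirrors the sentence above: encompassing extends variance dominance by accounting for the other model's parameters, not merely its residual variance.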
Choosing the Haavelmo distribution. Third, knowledge of the LDGP is the “optimum”
one can achieve for the given set of variables. Different choices of $\{r_t\}$,
and hence the Haavelmo distribution, will lead to different LDGPs with more or
less constancy and congruence with the available evidence. If (1.7) were indeed
the LDGP, then model selection could target its variables. The congruence of
