attribute. There is simply no case for claiming that we understand no more than
(say) Aristotle or Kepler: lights work, computers run, planes fly. Moreover, it is
possible to “predict” with considerable accuracy which changes to chips will speed
up calculations, and which putative aircraft will not fly. Such inferences go beyond
any local set of experiments and evidence, are not purely inductive, and can be
generalized, though doubtless with limits. Science seeks progress, whether by new experiments,
new instruments or observations, new theories or refutations of extant ones. We
now know that ulcers are caused by the bacterium Helicobacter pylori, not by stress,
so cheap, painless antibiotics can cure ulcers, replacing life-threatening operations
or expensive drugs. The path that led to that idea is irrelevant to its validity, and
could be serendipity, careful testing, or a theory prediction, whereas stringent eval-
uation and replicability are crucial. In turn, such advances can lead to radical new
thinking, albeit that initially they often face considerable opposition, sometimes
even derision: scientists are human, and rarely change their views until the evidence
becomes overwhelming. Even then, theories are usually not rejected by evidence, but rather
are replaced when “better” theories develop that explain more, especially if they
account for previous anomalies.
Statistical analyses become essential in observational sciences, such as astron-
omy and economics, where “field” experiments are almost impossible to control.
Then theory and modeling difficulties both explode and certainty declines, espe-
cially when behavioral change is possible: although such change renders previous
analyses less than fully relevant to new settings, progress remains the key. It is widely recognized
that special factors may intrude on a theory-based model (e.g., changes in credit
rationing, nationalization, deregulation, price controls, wars, etc.), but less recog-
nized that such special factors can dominate when accounting for data variability.
Morgan (1990), Spanos (1995), Hendry (1995b) and Hendry and Mizon (2000)
discuss some of the problems involved in testing theories using observational
data.
Economists have not formally justified the principle of deriving empirical mod-
els from theory – most seem to assume it is obvious – so a substantial proportion of
empirical econometric evidence is “high level” in that its credibility depends on the
prior credibility of the theoretical model from which it was derived. Given any con-
jecture, we can usually test its empirical validity, thereby sustaining a destructive
approach (see, e.g., Popper, 1963, on conjectures and refutations), although issues
of inference from small and heterogeneous data samples complicate the analysis.
If a theory implementation is simply discarded when it is rejected, the process fails
to incorporate learning from the evidence. Conversely, if it is not discarded, some
or all of the empirical model, the measurements and the theory must be revised,
although there is no unique or even structured way of doing so. It is a non sequitur to
assume that the particular alternative considered is true when the null is rejected. A
progressive research approach of successively encompassing congruent models (see
section 1.4.2.4), consolidated by empirically relevant theories, offers one possibility,
as sketched below.
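As a minimal formal sketch of the encompassing idea (the notation here is
illustrative rather than the chapter’s own; section 1.4.2.4 gives the precise
definitions): a model M_1 encompasses a rival model M_2 of the same data if M_1
can account for M_2’s findings. Writing \(\widehat{\theta}_1\) and
\(\widehat{\theta}_2\) for the two models’ estimators, and \(\theta_2(\cdot)\)
for the value of M_2’s parameter implied by M_1 (the “binding function”), one
standard formalization is

\[
  \mathsf{M}_1 \;\mathcal{E}\; \mathsf{M}_2
  \quad\text{if}\quad
  \widehat{\theta}_2 - \theta_2\!\left(\widehat{\theta}_1\right)
  \overset{p}{\longrightarrow} 0 ,
\]

so an encompassing test asks whether the discrepancy
\(\widehat{\theta}_2 - \theta_2(\widehat{\theta}_1)\) lies within sampling
variation. Successive encompassing then requires each new congruent model to
explain the results of its predecessors, which is what makes the research path
progressive rather than merely sequential.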
Alternative approaches to “macroeconomic theory” abound in the literature:
Samuelson (1947) initiated a tradition of models based on constrained optimiza-
tion, implemented by Hall (1978) as Euler equations; Kydland and Prescott (1990,