Palgrave Handbook of Econometrics: Applied Econometrics

The Methods of Growth Econometrics


unless they have been accompanied by systematic investigation of the data, includ-
ing the sensitivity of the results to outlying observations. There are many reasons
why some observations may be unrepresentative. It is possible for variables to be
measured with error for that particular region or country. Alternatively, the model
specified by the researcher may omit a relevant consideration, and so a group of
observations will act as outliers. It is inherent in least squares estimators that they
are highly sensitive to unrepresentative observations, and the dangers of using
OLS were forcefully expressed by Swartz and Welsch (1986, p. 171): “In a world of
fat-tailed or asymmetric error distributions, data errors, and imperfectly specified
models, it is just those data in which we have the least faith that often exert the
most influence on the OLS estimates.”
Some researchers respond to this concern by using leverage measures or single-
case diagnostics such as Cook’s distance statistic. There are well-known problems
with these approaches, because where more than one outlier is present, the extent
of the influence of one observation can easily be hidden by the presence of another
(the “masking” effect). By far the best response is to use a more robust estimator,
such as least trimmed squares, at least as a preliminary way of investigating the
data.^25 These issues are discussed in more detail in Temple (1998, 2000b).
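The contrast between OLS and a trimmed estimator can be illustrated with a small simulation. The sketch below is a crude concentration-step version of least trimmed squares (refitting on the observations with the smallest squared residuals), not the exact combinatorial LTS estimator; the data, trimming fraction, and contamination point are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated cross-section: one clean linear relation (true slope 2)
# plus a single bad leverage point, e.g. a grossly mismeasured country.
n = 50
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(scale=0.5, size=n)
x[0], y[0] = 4.0, -30.0  # contaminate one observation

def ols_slope(x, y):
    """Slope from a bivariate OLS regression with an intercept."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

def lts_slope(x, y, trim=0.2, iters=20):
    """Crude least trimmed squares: repeatedly refit OLS on the subset
    of observations with the smallest squared residuals (a sketch of
    the concentration step, not the exact LTS estimator)."""
    h = int(np.ceil((1 - trim) * len(y)))
    keep = np.ones(len(y), dtype=bool)
    for _ in range(iters):
        X = np.column_stack([np.ones_like(x[keep]), x[keep]])
        b = np.linalg.lstsq(X, y[keep], rcond=None)[0]
        resid2 = (y - (b[0] + b[1] * x)) ** 2
        keep = np.zeros(len(y), dtype=bool)
        keep[np.argsort(resid2)[:h]] = True
    return b[1]

print(ols_slope(x, y))  # dragged far from 2 by the single outlier
print(lts_slope(x, y))  # close to the true slope of 2
```

A single contaminated observation is enough to reverse the sign of the OLS slope here, while the trimmed fit recovers the relation in the clean data, which is why such estimators are useful as a preliminary check.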


24.7.2 Measurement error


We now turn to a more general discussion of measurement error. It is clear that
measurement errors are likely to be pervasive, especially in data that relate to devel-
oping countries, yet relatively few empirical studies of growth consider the impact
of measurement error in any detail. This rather casual approach often appeals to
the best-known statistical result, which applies to a bivariate model where the inde-
pendent variable is measured with error.^26 The estimate of the slope coefficient will
be biased towards zero, even in large samples, because measurement error induces
a covariance between the observable form of the regressor and the error term. This
attenuation bias is well known, but sometimes misleads researchers into suggesting
that measurement error will only mask effects, a claim that is not true in general.
When there are multiple explanatory variables, but only one is measured with
error, then typically all the parameter estimates will be biased. Some parameter
estimates may be biased away from zero and, although the direction of the bias
can be estimated consistently, this is rarely done. When several variables are mea-
sured with error, the assumption that measurement error only hides effects is even
less defensible.
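A short simulation makes the point concrete. In the design below (all parameter values are illustrative assumptions), x2 has a true coefficient of zero and is measured perfectly, yet because it is correlated with a mismeasured regressor, its OLS coefficient is biased away from zero while the coefficient on the mismeasured variable is attenuated.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# True model: y depends only on x1 (beta1 = 1, beta2 = 0),
# but x1 is observed with classical measurement error.
x1 = rng.normal(size=n)
x2 = 0.7 * x1 + rng.normal(scale=np.sqrt(0.51), size=n)  # corr with x1
y = 1.0 * x1 + rng.normal(scale=0.5, size=n)
w = x1 + rng.normal(size=n)  # observed version of x1; error variance 1

X = np.column_stack([np.ones(n), w, x2])
b = np.linalg.lstsq(X, y, rcond=None)[0]
b1, b2 = b[1], b[2]
# b1 is attenuated (probability limit about 0.34 rather than 1), and
# b2 is biased away from its true value of 0 (about 0.46): the
# correctly measured regressor partly proxies for the mismeasured one.
```

Measurement error in one variable thus contaminates the whole coefficient vector: here it would lead a researcher to find a spurious "effect" of x2, exactly the opposite of the effects-are-only-masked intuition.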
Where measurement error is present, the coefficients are typically not identified
unless other information is used. The most popular solution is to use instrumental
variables, if a separate instrument can be found which is likely to be independent
of the measurement error. A more complex solution, which does not need an
additional variable, is to exploit higher-order sample moments to construct IV
estimators, as in Dagenais and Dagenais (1997) and Arcand and Dagenais (2005).
The reliability of these procedures in small samples is uncertain, since the use of
higher-order moments could make them sensitive to outliers.
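The standard IV remedy for classical measurement error can be sketched as follows. The instrument z below is a hypothetical construction, assumed correlated with the true regressor but independent of both the measurement error and the disturbance; in practice finding such a variable is the hard part.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Hypothetical instrument z: correlated with the true regressor,
# independent of the measurement error and the disturbance.
z = rng.normal(size=n)
x = z + rng.normal(size=n)         # true (unobserved) regressor
y = 2.0 * x + rng.normal(size=n)   # true slope = 2
w = x + rng.normal(size=n)         # observed regressor, with error

# OLS on the error-ridden regressor: attenuated towards zero
# (probability limit 2 * var(x) / var(w) = 4/3 in this design).
ols = np.polyfit(w, y, 1)[0]

# Simple IV estimator, cov(z, y) / cov(z, w): consistent for 2,
# because z is uncorrelated with the measurement error.
iv = np.cov(z, y)[0, 1] / np.cov(z, w)[0, 1]
```

The IV estimate recovers the true slope because the instrument's covariance with the observed regressor reflects only the true signal, not the error, whereas the OLS denominator is inflated by the error variance.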
