
to bring together current wisdom and to indicate areas where more research would be welcome. Our discussion also draws out, particularly in section 5.3, the distinction between model-based and subjective density forecasts, a distinction with much relevance for how forecasting is conducted in practice by professional forecasters; we also consider their reconciliation. We do not discuss presentation issues, which we defer to Tay and Wallis (2000) and Wallis (2007).
We confine attention to univariate density forecasts. There is a smaller but growing literature on multivariate density forecasting, some of it drawing on recent applications of copula functions in economics (e.g., Patton, 2006), where the copula characterizes the dependence between the density forecasts. Diebold, Gunther and Tay (1998) and Diebold, Hahn and Tay (1999) show that the principle behind the evaluation of univariate density forecasts, discussed in section 5.4 below, generalizes to the multivariate case (see also Clements and Smith, 2000, 2002). Adolfson, Linde and Villani (2007) use a multivariate scoring rule to compare density forecasts of the Euro-area from vector autoregressive (VAR) and dynamic stochastic general equilibrium (DSGE) models. Barrell, Hall and Hurst (2006) and Mitchell (2007a) consider how bivariate density forecasts for inflation and output growth facilitate the evaluation of policy rules simultaneously with respect to their performance against the inflation target and any output growth target that the policy makers may also have in mind.
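To make the role of the copula concrete, the sketch below builds a bivariate density forecast for inflation and output growth from two univariate density forecasts via Sklar's theorem. It is only illustrative: the Gaussian marginals, the Gaussian copula and the correlation of -0.3 are assumptions made for this sketch, not quantities taken from the papers cited above.

import numpy as np
from scipy import stats

# Hypothetical marginal density forecasts (both Gaussian, parameters invented):
# one-year-ahead inflation and output growth, in per cent.
infl = stats.norm(loc=2.0, scale=1.0)
grow = stats.norm(loc=2.5, scale=1.5)

rho = -0.3                                    # assumed dependence between the two forecasts
copula_corr = np.array([[1.0, rho], [rho, 1.0]])
biv_norm = stats.multivariate_normal(mean=[0.0, 0.0], cov=copula_corr)

def joint_pdf(x, y):
    # Sklar's theorem: f(x, y) = c(F1(x), F2(y)) * f1(x) * f2(y),
    # with c(.) the Gaussian copula density evaluated at the marginal CDFs.
    z = stats.norm.ppf([infl.cdf(x), grow.cdf(y)])    # map to Gaussian scores
    copula_density = biv_norm.pdf(z) / (stats.norm.pdf(z[0]) * stats.norm.pdf(z[1]))
    return copula_density * infl.pdf(x) * grow.pdf(y)

print(joint_pdf(2.0, 1.0))   # joint forecast density at inflation = 2%, growth = 1%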


5.2 The importance of density forecasts


Periodically, and perhaps especially at times of heightened uncertainty, one hears the argument that it is time to jettison economic forecasts given their unreliability. But, as discussed above, we should not be surprised by the unreliability of point forecasts; indeed, that unreliability is itself a useful indication of uncertainty. In a loose sense, ignoring for now moments higher than the second, what is important is the ability of the point forecast, relative to its variance, to track the outturn. More generally, it is important to provide a quantitative indication of the uncertainty associated with a point forecast, along with the balance of risks (skewness) on the upside and downside and the probability of extreme events (fat tails or kurtosis). This is achieved by publishing a density forecast. Importantly, the density forecast gives any users of the forecast an indication, in advance, of the health risks associated with its use.
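As an illustration of the extra information a density forecast carries beyond the point forecast, the sketch below takes a purely hypothetical skew-normal forecast density for inflation and reads off its variance, skewness (the balance of risks), kurtosis and the probability of an extreme outturn. The distribution, its parameters and the 2% and 5% thresholds are invented for this example.

from scipy import stats

# A purely illustrative one-year-ahead inflation forecast density with upside skew.
forecast = stats.skewnorm(a=4.0, loc=1.5, scale=1.2)

mean, var, skew, kurt = (float(m) for m in forecast.stats(moments='mvsk'))

print(f"point forecast (mean)        : {mean:.2f}")
print(f"forecast variance            : {var:.2f}")
print(f"balance of risks (skewness)  : {skew:.2f}")
print(f"fat tails (excess kurtosis)  : {kurt:.2f}")
print(f"P(inflation above 2% target) : {float(forecast.sf(2.0)):.2f}")
print(f"P(extreme outturn above 5%)  : {float(forecast.sf(5.0)):.3f}")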
Although it is a truism to say that density forecasts cannot capture unknowable uncertainty (Knightian uncertainty) and only capture “risk” (knowable uncertainty), a distinction introduced by Knight (1921), these “risk” assessments can be evaluated ex post. Indeed, they should be assessed on a regular and ongoing basis. There is no reason to expect, especially at times of structural change, that the density forecast correctly captures uncertainty. Forecasters’ statements about the underlying uncertainty may be, and indeed often are, unreliable. When forecasters expect them to be unreliable, the variance of the conditional variance forecast need not equal zero. Therefore evaluation tests, reviewed in section 5.4, have been developed to test, essentially, whether on average over a given sample a forecaster’s
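To give a flavour of these tests, the sketch below uses the probability integral transform (PIT) approach of Diebold, Gunther and Tay (1998): if the density forecasts coincide with the true densities, the PITs of the outturns are i.i.d. uniform on [0, 1]. The Gaussian forecast densities, the simulated outturns and the choice of a Kolmogorov-Smirnov test of uniformity are all illustrative assumptions, not part of the discussion above.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
T = 200

# Simulated sequence of Gaussian one-step-ahead density forecasts and outturns.
# The forecasts are correct by construction, so the PITs should look uniform.
means = rng.normal(2.0, 0.5, size=T)          # forecast means
sigma = 1.0                                   # forecast standard deviation
outturns = means + rng.normal(0.0, sigma, size=T)

# Probability integral transforms: each outturn evaluated at its own forecast CDF.
pits = stats.norm.cdf(outturns, loc=means, scale=sigma)

# Under correct density forecasts the PITs are i.i.d. U(0, 1);
# a Kolmogorov-Smirnov test gives a simple check of uniformity.
ks_stat, p_value = stats.kstest(pits, 'uniform')
print(f"KS statistic = {ks_stat:.3f}, p-value = {p_value:.3f}")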
