202 Recent Developments in Density Forecasting
assessment of “risk” was correct. When it is correct, their “risk” forecast might be
said to have captured “true” uncertainty.
These sorts of historical evaluations of fan charts, complementing the traditional
and widespread practice of evaluating the track record of point forecasts, are begin-
ning to be carried out routinely by forecasters (for an appraisal of UK inflation
density forecasts by the Bank of England and/or the National Institute of Eco-
nomic and Social Research (NIESR), see Clements, 2004; Wallis, 2004; Mitchell,
2005; Elder et al., 2005). The results provide an indication as to whether, albeit
historically, a series of fan charts was reliable. Just as measures of point forecast
accuracy indicate, to a degree, our confidence in point forecasts, these tests provide
an indication of our confidence in fan charts.
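Tests of this kind typically rest on probability integral transforms (PITs): under a correctly calibrated sequence of density forecasts, the PITs of the outcomes should be i.i.d. uniform on (0, 1). The following is a minimal sketch, assuming (purely for illustration) normal density forecasts and using a simple Kolmogorov–Smirnov uniformity check; the papers cited above use refinements of this basic idea (e.g., likelihood-ratio tests in the spirit of Berkowitz).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical setting: each period the forecaster issues a normal density
# forecast N(mu_t, sigma_t^2) and y_t is the subsequent outcome.
T = 500
mu = rng.normal(0.0, 1.0, T)       # forecast means (illustrative)
sigma = np.full(T, 0.8)            # forecast standard deviations (illustrative)
y = mu + rng.normal(0.0, 0.8, T)   # outcomes consistent with the forecasts

# PITs: z_t = F_t(y_t). Under correct calibration these are i.i.d. U(0, 1).
z = stats.norm.cdf(y, loc=mu, scale=sigma)

# A simple uniformity check on the PITs.
ks_stat, p_value = stats.kstest(z, "uniform")
print(f"KS statistic {ks_stat:.3f}, p-value {p_value:.3f}")
```

A small p-value would indicate that the fan charts were, historically, poorly calibrated: too wide, too narrow, or mislocated.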
5.2.1 Forecasting under general loss functions
What really matters is how forecasts affect decisions. The “better” forecasts are
those that deliver “better” decisions. On this basis it is argued that the appropriate
way of evaluating forecasts is not to use some arbitrary statistical loss function, but
the appropriate economic loss function (see Granger, 1969; Granger and Pesaran,
2000). Only when the forecast user has a symmetric, quadratic loss function, and
the constraints (if relevant) are linear, is it correct to focus on the point forecast
alone. This is what the textbooks call “certainty equivalence” (for further discus-
sion and a proof, see Ljungqvist and Sargent, 2000, pp. 57–9). In the more general
case, the degree of uncertainty matters. Publishing a point forecast alone is not
sufficient; users are not indifferent to the degree of uncertainty about the point
forecast.^1 They will not then make decisions as if they were certain. Uncertainty is
expected to attenuate their response or reaction to the point forecast (see Brainard,
1967). For a more recent discussion, in the context of policy makers’ reactions to
real-time output gap estimates, see Swanson (2004); these estimates are known to
be unreliable (Orphanides and van Norden, 2002), although, as discussed above,
this is not surprising (see also Mitchell, 2007b).
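The Brainard (1967) attenuation result can be illustrated with a small numerical sketch (the numbers are illustrative, not from the text). Suppose the policy maker sets an instrument u to hit a target y* through y = b·u, where the policy multiplier b is uncertain, b ~ N(b_bar, sig_b^2). Minimizing expected squared loss E[(y − y*)^2] gives u* = b_bar·y*/(b_bar^2 + sig_b^2), so greater multiplier uncertainty attenuates the response relative to the certainty case y*/b_bar:

```python
# Brainard-style attenuation under multiplicative uncertainty.
# Minimising E[(b*u - y_star)^2] with b ~ N(b_bar, sig_b^2) yields
#     u* = b_bar * y_star / (b_bar**2 + sig_b**2),
# which shrinks toward zero as the uncertainty sig_b grows.
b_bar, y_star = 1.0, 2.0

for sig_b in (0.0, 0.5, 1.0):
    u_opt = b_bar * y_star / (b_bar**2 + sig_b**2)
    print(f"sig_b = {sig_b:.1f}: optimal response u* = {u_opt:.3f}")
# -> 2.000 under certainty, 1.600 and 1.000 as uncertainty rises
```

With sig_b = 0 the policy maker acts on the point forecast alone (certainty equivalence); as sig_b rises, the optimal response is progressively muted.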
The importance of publishing density forecasts then follows from the fact that
we tend, in reality, not to know users’ loss functions. Central banks do not quantify,
explicitly at least, their loss functions, but we should not expect these (unknown
to us) functions to be quadratic. For example, we should expect the range of uncer-
tainty to matter to the Federal Reserve since it probably does not care equally about
inflation above and below the zero bound. The central bank has then to be what
Svensson (2001) calls a distribution forecast targeter.
When the forecast user’s loss function is asymmetric, such that positive and
negative forecasting errors have differing costs, the user’s “optimal” forecast need
not equal the conditional mean (e.g., Zellner, 1986; see Pesaran and Weale, 2006,
for a survey). Working out the optimal forecast can be complex, but if it is assumed
that the conditional distribution of y_{t|t−h} is normal, such that y_{t|t−h} ~
N(E(y_{t|t−h}), V(y_{t|t−h})), and the loss function is modeled via the Linex loss
function, an analytical solution can be derived. Under these conditions the optimal
or minimum loss point forecast, ŷ_{t|t−h}, is no longer equal to the conditional mean