
The NIESR began publishing regular density forecasts in 1996, although it had
been publishing a mean absolute error for its inflation forecasts since the second
quarter of 1992. The NIESR, in contrast to the Bank, imposes a normal distribution
around its point forecast, with the variance determined on the basis of past
forecast errors. The window of past errors used to estimate this variance turns out
to be quite important: there is uncertainty about the variance used to quantify the
degree of uncertainty inherent in the fan chart (see Mitchell, 2005). Until 2002 the
NIESR estimated the variance over a sample starting in 1982; from 2002 onwards
it used a sample beginning in 1993. Given the general fall in the volatility of most
UK economic series, this change brought about a considerable fall in the estimated
variance of its density forecasts, underlining how consequential the choice of
window can be for the quality of the density forecast.
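To illustrate, a minimal sketch in Python of this type of construction is given
below: a normal density is placed around the point forecast, with its variance
estimated from past forecast errors over a chosen window. The error series, point
forecast and window lengths are purely hypothetical, not NIESR data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical past forecast errors: an early high-volatility period
# followed by a more stable recent period, mimicking the fall in UK volatility.
past_errors = np.concatenate([rng.normal(0.0, 1.5, 40),
                              rng.normal(0.0, 0.6, 20)])

point_forecast = 2.5  # hypothetical inflation point forecast (%)

def density_forecast(point, errors, window):
    """Normal density around `point`, with s.d. from the last `window` errors."""
    sigma = np.std(errors[-window:], ddof=1)
    return stats.norm(loc=point, scale=sigma)

# The window choice matters: a long window spanning the volatile period
# implies a much wider fan chart than a short, recent window.
for window in (60, 20):
    d = density_forecast(point_forecast, past_errors, window)
    lo, hi = d.ppf([0.05, 0.95])
    print(f"window={window}: 90% interval = ({lo:.2f}%, {hi:.2f}%)")
```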
Mitchell (2005) found that a break in the unconditional variance of the
NIESR's forecast errors around 1993–94 could have been detected by recursive
analysis of those errors as early as the end of 1996, rather than in 2002. It is
therefore important to monitor historical forecast errors regularly, using statistical
tests for structural breaks at an unknown point, to help select a period of history
that is informative about the future.
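As a stylized illustration of such monitoring, the following Python sketch applies
the centred cusum-of-squares statistic of Inclán and Tiao (1994), one standard test
for a variance break at an unknown date, recursively to a simulated error series
with a break inserted halfway through; the series and the approximate critical
value are illustrative assumptions, not the NIESR's actual errors.

```python
import numpy as np

rng = np.random.default_rng(1)
errors = np.concatenate([rng.normal(0, 2.0, 40),   # high-volatility regime
                         rng.normal(0, 0.8, 40)])  # low-volatility regime

def cusum_of_squares(e):
    """Inclan-Tiao statistic: large values signal a break in variance."""
    c = np.cumsum(e ** 2)
    t = len(e)
    d = c / c[-1] - np.arange(1, t + 1) / t
    k = int(np.argmax(np.abs(d)))
    return np.sqrt(t / 2) * np.abs(d[k]), k

# Monitor recursively: re-test each period as new forecast errors arrive,
# so a break can be flagged within a few years rather than a decade later.
for t in range(20, len(errors) + 1, 10):
    stat, k = cusum_of_squares(errors[:t])
    flag = "break?" if stat > 1.358 else ""   # approx. 5% asymptotic critical value
    print(f"sample size {t:3d}: stat = {stat:.2f} at obs {k + 1}  {flag}")
```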
Stochastic simulation has been discussed as an alternative to historical errors for
measuring the uncertainty associated with the forecast. It might be expected to
deliver a better measure of uncertainty if a new policy regime (such as a new
target for inflation) is adopted.
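A minimal sketch of the stochastic simulation idea, with a hypothetical AR(1)
standing in for a full forecasting model: future shocks are drawn repeatedly, the
model is simulated forward, and the forecast density is read off the simulated
paths.

```python
import numpy as np

rng = np.random.default_rng(2)
rho, sigma = 0.8, 0.5          # hypothetical AR(1) persistence and shock s.d.
y_last, horizon, n_sims = 2.0, 8, 10_000

paths = np.empty((n_sims, horizon))
y = np.full(n_sims, y_last)
for h in range(horizon):
    y = rho * y + rng.normal(0.0, sigma, n_sims)   # one-step transition with drawn shocks
    paths[:, h] = y

# Percentiles of the simulated paths trace out a fan chart. Under a new
# policy regime, the shock distribution or parameters can be changed here
# rather than inferred from a history that no longer applies.
bands = np.percentile(paths, [5, 50, 95], axis=0)
for h in range(horizon):
    print(f"h={h + 1}: 5%={bands[0, h]:.2f}  median={bands[1, h]:.2f}  95%={bands[2, h]:.2f}")
```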
All of these forecasts can be viewed as essentially subjective in nature, since they
are not the direct output of a formal model. Even so, the techniques for formal
forecast evaluation discussed below may still be applied, and we argue that an
important stage in constructing even a subjective density forecast is an evaluation
of its track record.


5.3.4 Combining model-based and subjective density forecasts


In point forecasting there has long been a common practice of combining
model-based information and subjective judgment. Very few real forecasts are
purely the result of a model and, equally, most forecasters use a formal model in
one form or another to structure the forecasting procedure. It seems reasonable,
therefore, that when we come to consider density forecasts we should similarly
consider a formal mixture of model-based and subjective information. One
approach would be simply to form two quite separate forecasts, one subjective
and one model-based, and to combine them. We discuss density forecast
combination in detail in section 5.5 below, so we do not pursue that possibility
here. An alternative, formal means of combining model-based density forecasts
with judgment is to adopt a Bayesian approach, with the non-data information
summarized by the “prior” (see Sims and Zha, 1998).
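In the simplest conjugate case this amounts to precision-weighted averaging. A
minimal sketch, with a normal model-based density standing in for the data
information and a normal “prior” summarizing judgment (all numbers
hypothetical):

```python
model_mean, model_var = 2.8, 0.50        # model-based density forecast
prior_mean, prior_var = 2.0, 0.25        # subjective judgment, as a normal prior

# Standard normal-normal updating: precisions add, means are precision-weighted.
post_prec = 1.0 / model_var + 1.0 / prior_var
post_var = 1.0 / post_prec
post_mean = post_var * (model_mean / model_var + prior_mean / prior_var)

print(f"combined density: N({post_mean:.2f}, {post_var:.2f})")
# The tighter the prior (smaller prior_var), the more the combined density
# is pulled towards the judgmental view; a diffuse prior returns the model.
```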
Waggoner and Zha (1999) consider how to use Bayesian methods to compute
density forecasts for conditional forecasts in VAR models, which allow one to
impose conditions on the likely future values of endogenous variables.
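A stylized sketch of a hard conditional forecast in this spirit, for a hypothetical
bivariate VAR(1): the future path of the first variable is imposed, the shocks to
that variable implied by the condition are backed out, and the second variable's
shocks are drawn from their conditional distribution. The coefficients, shock
covariance and conditioning path are all illustrative, and the treatment of
parameter uncertainty in Waggoner and Zha's Gibbs sampler is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)
A = np.array([[0.7, 0.1],
              [0.2, 0.5]])                 # hypothetical VAR(1) coefficients
Sigma = np.array([[0.30, 0.10],
                  [0.10, 0.20]])           # hypothetical shock covariance
y0 = np.array([2.0, 1.0])
cond_path = [1.8, 1.6, 1.5]                # imposed future values of variable 1

n_sims = 10_000
draws = np.empty((n_sims, len(cond_path)))  # conditional draws of variable 2
for s in range(n_sims):
    y = y0.copy()
    for h, y1_fixed in enumerate(cond_path):
        mean = A @ y
        e1 = y1_fixed - mean[0]             # shock to variable 1 implied by the condition
        # draw e2 | e1 from the conditional normal distribution
        e2_mean = Sigma[1, 0] / Sigma[0, 0] * e1
        e2_var = Sigma[1, 1] - Sigma[1, 0] ** 2 / Sigma[0, 0]
        e2 = rng.normal(e2_mean, np.sqrt(e2_var))
        y = mean + np.array([e1, e2])
        draws[s, h] = y[1]

# Percentiles of the conditional draws give the density forecast of
# variable 2, given the imposed path for variable 1.
bands = np.percentile(draws, [5, 50, 95], axis=0)
for h in range(len(cond_path)):
    print(f"h={h + 1}: 5%={bands[0, h]:.2f}  median={bands[1, h]:.2f}  95%={bands[2, h]:.2f}")
```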
Del Negro and Schorfheide (2004) use a DSGE model as a prior for a VAR and find
this improves point forecasting performance as measured by RMSE. Clark and
McCracken (2008) essentially impose a hard informative prior
