Palgrave Handbook of Econometrics: Applied Econometrics


Recent Developments in Density Forecasting


the mean square error between the c.d.f. of the density forecast and the true c.d.f.,
integrated over different quantiles of the c.d.f. (Corradi and Swanson). Rather
than relying on the pits or the logarithmic score, in both cases they estimate the
true density or c.d.f. empirically. $y_t$ is then required to be strictly stationary, an
assumption often not supported for economic time series.
(5.36) is an unconditional test for equal forecast accuracy (see Giacomini and
White, 2006 (GW)). GW have developed more general conditional tests. These test
which forecast will be more accurate at a future date, rather than, as with the
unconditional tests, testing which forecast was more accurate "on average." One
could, for example, then recursively select at time $t$ the best forecasting method for
$t+1$. Conditional tests can be straightforwardly implemented in our framework.
The null hypothesis of equal conditional forecast accuracy (for one-step-ahead forecasts)
amounts to testing $E(d_{t|t-1} \mid h^*_{t-1}) = E(h^*_{t-1} d_{t|t-1}) = 0$ $(t = 2, 3, \ldots)$, where
$h^*_{t-1}$ is a vector of "test functions," which we set equal to $h^*_{t-1} = (1, d_{t-1|t-2})'$. The
GW test statistic $GW_T$ can be computed as the Wald statistic:


$$GW_T = T \left( T^{-1} \sum_{t=2}^{T} h^*_{t-1} d_{t|t-1} \right)' \hat{\Omega}_T^{-1} \left( T^{-1} \sum_{t=2}^{T} h^*_{t-1} d_{t|t-1} \right), \qquad (5.39)$$

where $\hat{\Omega}_T$ is a consistent estimator for the asymptotic variance of $h^*_{t-1} d_{t|t-1}$ and
$GW_T \stackrel{d}{\rightarrow} \chi^2_2$. GW note that a robust HAC estimator for this variance could be
employed, as with DM-type tests, but they also explain that the sample variance is
a consistent estimator when one exploits the fact that the null hypothesis implies
$\{h^*_{t-1} d_{t|t-1}\}_{t=2}^{T}$ is a martingale difference sequence. GW argue that this has the
advantage of allowing the data $\{y_t\}$ to be heterogeneous and characterized by
arbitrary structural breaks at unknown points. Their test is also valid for nested
models.
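As an illustration, the Wald statistic (5.39) can be sketched in a few lines of Python. This is our own minimal implementation, not GW's code: it takes a series of loss differentials $d_{t|t-1}$, forms the test functions $h^*_{t-1} = (1, d_{t-1|t-2})'$, and exploits the martingale difference property under the null so that the sample second moment serves as the variance estimator.

```python
import numpy as np
from scipy import stats


def gw_test(d):
    """Conditional test of equal forecast accuracy in the spirit of
    Giacomini and White (2006), for one-step-ahead forecasts.

    d : sequence of loss differentials d_{t|t-1}, t = 1, ..., T.
    Uses test functions h*_{t-1} = (1, d_{t-1|t-2})', so the Wald
    statistic is asymptotically chi-squared with 2 degrees of
    freedom under the null.
    """
    d = np.asarray(d, dtype=float)
    # Stack Z_t = h*_{t-1} d_{t|t-1} for t = 2, ..., T as rows.
    Z = np.column_stack([np.ones(d.size - 1), d[:-1]]) * d[1:, None]
    n = Z.shape[0]
    zbar = Z.mean(axis=0)
    # Under the null {Z_t} is a martingale difference sequence, so the
    # sample second moment consistently estimates the asymptotic
    # variance -- no HAC correction is needed at horizon one.
    omega = Z.T @ Z / n
    gw = n * zbar @ np.linalg.solve(omega, zbar)
    pval = stats.chi2.sf(gw, df=2)
    return gw, pval
```

A rejection indicates that current information (here, the lagged loss differential) helps predict which forecast will be more accurate next period, which is precisely what licenses recursively selecting the best method for $t+1$ at time $t$.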


5.5 The combination of density forecasts


Rather than select a single "best" forecast, it can be beneficial to combine competing
forecasts. This follows from the fact that, although one model may
be "better" than the others, we may not select it with probability one; we may not
be sure that it is the best forecast. Therefore, if we considered this single forecast
alone, we would be overstating its precision. We may better approximate the truth,
and account for the uncertainty in model selection, by combining forecasts. Forecast
combination also provides a means of reconciling subjective and model-based
densities, as discussed above (see also Osterholm, 2006).
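One common way to combine density forecasts (though not the only one) is a linear opinion pool: a weighted mixture of the individual densities. A minimal sketch, with the function name and inputs our own, assuming each density has been evaluated on a common grid:

```python
import numpy as np


def linear_pool(pdfs, weights):
    """Combine competing density forecasts of the same event as a
    weighted mixture (a linear opinion pool): the combined density
    is the pointwise weighted average of the individual densities."""
    pdfs = np.asarray(pdfs, dtype=float)  # one density per row, common grid
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                       # weights must sum to one
    return w @ pdfs
```

For example, an equal-weighted pool of a model-based and a subjective density evaluated on the same grid again integrates to one, and its spread reflects both the individual forecast uncertainties and the disagreement between them.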
Indeed, it is well recognized both theoretically and empirically that combining
competing individual point forecasts of the same event can deliver more accu-
rate forecasts, in the sense of a lower RMSE (see Bates and Granger, 1969; Stock
and Watson, 2004; Timmermann, 2006). The success of combination follows from
the fact that individual forecasts may be based on misspecified models, poor esti-
mation or non-stationarities. Moreover, recent work (e.g., Hendry and Clements,
