Michael P. Clements and David I. Harvey 187
simulation evidence suggests that the latter approach yields considerably better
finite sample size control.
Finally, Clark and McCracken (2001, 2005) also propose a new FE(2)-based
encompassing test for use with nested linear model forecasts, which is shown to
be more powerful than the corresponding MDM-type tests. The test statistic is
given by:
\[
\text{ENC-F} = \frac{n\hat{d}}{n^{-1}\sum_{t=R+h}^{R+n+h-1}\hat{e}_{2t}^{2}}.
\]
When h = 1 and the forecast errors are conditionally homoskedastic, the statistic
has a nonstandard limit distribution, a functional of Brownian motion, for
0 < π < ∞ under the null,
and critical values from this distribution are provided by Clark and McCracken
(2000, 2001) for the fixed, recursive and rolling estimation schemes. When h > 1,
the ENC-F statistic does not have a pivotal asymptotic distribution, even when
k_2 = 1; in this more general case of multi-step prediction, therefore, critical values
must be obtained by bootstrapping. When π = 0, ENC-F is degenerate, and needs
to be rescaled by (R/n)^{1/2} to obtain a limit distribution under the null.
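The ENC-F statistic can be computed directly from the two series of out-of-sample forecast errors. The sketch below is illustrative rather than the authors' code: it assumes the standard Clark–McCracken encompassing loss differential, d̂ being the sample mean of ê_{1t}(ê_{1t} − ê_{2t}), and the function name is the author's invention.

```python
import numpy as np

def enc_f(e1, e2):
    """ENC-F statistic for nested-model forecast encompassing.

    e1, e2 : out-of-sample forecast errors of the restricted (model 1)
    and unrestricted (model 2) forecasts, each of length n.
    Assumes the Clark-McCracken loss differential
    d_t = e1_t * (e1_t - e2_t), so that the statistic is
    n * mean(d_t) / mean(e2_t**2)."""
    e1 = np.asarray(e1, dtype=float)
    e2 = np.asarray(e2, dtype=float)
    n = e1.size
    d_bar = np.mean(e1 * (e1 - e2))   # sample mean of the loss differential
    mse2 = np.mean(e2 ** 2)           # n^{-1} sum of squared model-2 errors
    return n * d_bar / mse2
```

Large positive values suggest that the second forecast contains information absent from the first; for h = 1 the statistic would be compared with the Clark–McCracken critical values, and for h > 1 with bootstrapped ones, as described above.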
4.5 Conditional tests of forecast encompassing
Hitherto, we have considered tests of forecast encompassing that are based on
the notion of unconditional expected loss. Giacomini and White (2006) present a
general framework for out-of-sample predictive ability testing which is character-
ized by the formulation of tests (such as tests for forecast encompassing) based on
conditional expected loss. Tests of forecast encompassing based on unconditional
expected loss indicate whether f_{1t} encompasses f_{2t} on average, i.e., over the whole
sample, whereas a conditional evaluation would indicate that f_{1t} encompasses f_{2t}
if it were not possible to predict whether the combination of f_{1t} and f_{2t} would
outperform f_{1t} based on information known at t − 1. The approach of Giacomini
and White (2006) also differs from the standard approach to testing for predictive
ability in that it compares forecasting methods rather than forecasting models.
Following the seminal contribution of West (1996), the underlying aim is to compare
the forecast performance of the models in population. Although forecasts are
derived from models with estimated parameters, hypotheses concerning predic-
tive ability are framed in terms of forecasts based on the population values of the
model parameters, necessitating an allowance for the impact of estimation uncer-
tainty, as discussed in section 4.3. Instead, the approach of Giacomini and White
(2006) compares the forecast performance of the methods, where the method com-
prises the method of estimation and the number of observations to include in
the estimation window, in addition to the specification of the model. Estimation
uncertainty is thus a key feature of the forecasting method and affects forecast
performance.
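The conditional idea can be made concrete with a small sketch. The following is not the authors' implementation but an illustrative Giacomini–White-style test for h = 1: it forms z_t = h_t d_{t+1}, where d_{t+1} = ê_{1,t+1}(ê_{1,t+1} − ê_{2,t+1}) is the encompassing loss differential and h_t is a vector of instruments known at t, and uses the chi-squared statistic n z̄'Ω̂⁻¹z̄. The function name and interface are assumptions for illustration.

```python
import numpy as np

def conditional_encompassing_stat(e1, e2, instruments):
    """Conditional test that forecast 1 encompasses forecast 2 (h = 1).

    e1, e2      : forecast errors dated t+1, length n.
    instruments : (n, q) array of variables known at time t; include a
                  column of ones to nest the unconditional test.
    Returns the statistic n * zbar' inv(Omega) zbar, which is
    asymptotically chi-squared with q degrees of freedom under the
    null that the instruments cannot predict the loss differential."""
    e1 = np.asarray(e1, dtype=float)
    e2 = np.asarray(e2, dtype=float)
    H = np.asarray(instruments, dtype=float)
    if H.ndim == 1:
        H = H[:, None]
    d = e1 * (e1 - e2)                       # encompassing loss differential
    Z = H * d[:, None]                       # z_t = h_t * d_{t+1}
    n, q = Z.shape
    zbar = Z.mean(axis=0)
    omega = (Z - zbar).T @ (Z - zbar) / n    # sample variance of z_t
    return n * zbar @ np.linalg.solve(omega, zbar)
```

With instruments equal to a column of ones, the statistic reduces to a squared t-ratio for the unconditional ("on average") encompassing null; adding lagged variables to h_t delivers the conditional evaluation described above, rejecting when past information helps predict whether combining with f_{2t} would pay off.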