210 Recent Developments in Density Forecasting
on the steady-state of the VAR model by detrending, in particular inflation, prior
to forecasting. Villani (2005) proposes methods to impose an informative prior on
the steady-state, in particular the unconditional means of the model, and hence on
the long-run forecasts. The priors push the long-run forecasts towards the chosen
steady-state, say trend or target inflation.
We will, however, discuss a recently proposed technique which allows a model’s
density forecast to be altered at a second step in the light of subjective or “off-
model” information. This proposal stems from recent work by Robertson, Tallman
and Whiteman (2005), which is based on earlier work by Stutzer (1996) and
Kitamura and Stutzer (1997).
We are interested in a density forecast for an M-dimensional vector of variables Y.
In general, if we are attempting to derive a model-based density forecast based on a
possibly nonlinear model it will not be possible to derive this density analytically.
However, given the long history of stochastic simulation analysis referenced above,
it is usually possible to approximate this density. Thus, assume we have derived
a sample of N draws, denoted {Y_i}, (i = 1,...,N), and that we also have a set of
weights {π_i}, (i = 1,...,N). It is then possible to approximate the model's density
function by simply weighting together a transformation of the sample of draws. If
we have a random sample from the predictive density, the weights are:

π_i = 1/N, ∀i.  (5.16)
The mean of the density forecast is:
Ȳ = Σ_{i=1}^{N} π_i Y_i,  (5.17)
and so on for any other moments.
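As a concrete illustration, the weighted-average calculations in (5.16) and (5.17) can be sketched as follows; the bivariate normal predictive density and its parameter values are purely illustrative stand-ins for draws from an actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate N draws {Y_i} from a (hypothetical) predictive density for an
# M-dimensional vector Y; the mean and covariance here are illustrative.
N, M = 10_000, 2
draws = rng.multivariate_normal(mean=[1.0, 2.0],
                                cov=[[1.0, 0.3], [0.3, 1.0]],
                                size=N)

# Equation (5.16): a random sample carries uniform weights pi_i = 1/N.
weights = np.full(N, 1.0 / N)

# Equation (5.17): the density-forecast mean is the weighted average of
# the draws; higher moments follow the same weighted-sum pattern.
mean_forecast = weights @ draws
print(mean_forecast)  # sample mean, near the true mean [1.0, 2.0]
```

The same weighted sum, with Y_i replaced by (Y_i − Ȳ)², would give the forecast variance, and so on for higher moments.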
This, therefore, provides a means to approximate the density forecast of the
model. Now assume that, in addition to the model, we have some extra information
which we wish to incorporate into the density forecast. We may think of this as a
set of moment conditions which we wish the final density forecast to obey. In a
very simple example we might wish to locate the mean of a variable at a particular
point or we might wish to impose a certain degree of skewness. Suppose we wished
the mean of the vector of variables to take some particular set of values, g. In
general, of course, this will not coincide with Ȳ, as:
Σ_{i=1}^{N} π_i Y_i ≠ g.  (5.18)
The idea then is to create a new set of weights π_i* such that this restriction holds
exactly. Of course, for N sufficiently large, there will generally be an infinite number
of sets of weights which would satisfy this restriction, so the idea is to choose a set of
weights which satisfy the restriction while, at the same time, remaining as close as
possible to the original weights. This, of course, requires a definition of closeness
and Robertson et al. (2005) establish that, under a set of regularity conditions,
the appropriate measure of closeness is the Kullback–Leibler information criterion
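A minimal sketch of this reweighting, assuming a scalar Y and a single mean restriction g: minimising the Kullback–Leibler criterion subject to the moment condition leads to an exponential tilt of the original weights, with the tilting parameter λ found by minimising a convex dual function (a standard result for this class of problem; the simulated draws and the value of g below are purely illustrative):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
N = 10_000
draws = rng.normal(loc=0.0, scale=1.0, size=N)  # draws from the model's density
pi = np.full(N, 1.0 / N)                        # original uniform weights
g = 0.5                                         # off-model restriction on the mean

# Dual objective: minimising log sum_i pi_i * exp(lam * (Y_i - g)) over lam
# yields the KL-closest weights that satisfy the mean restriction.
def dual(lam):
    return np.log(pi @ np.exp(lam * (draws - g)))

lam_star = minimize(dual, x0=0.0).x[0]

# New weights pi_i*: an exponential tilt of the originals, renormalised.
tilt = pi * np.exp(lam_star * (draws - g))
pi_star = tilt / tilt.sum()

print(pi_star @ draws)  # the reweighted mean now equals the imposed g = 0.5
```

Note that the new weights remain strictly positive and sum to one, so they still define a proper discrete approximation to a density; draws near the imposed mean simply receive more weight than under the original uniform scheme.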