Palgrave Handbook of Econometrics: Applied Econometrics


Continuous-Time Stochastic Volatility Models


these facets is based on the ARCH model introduced by Engle (1982) and its GARCH
generalization by Bollerslev (1986).^2 In this approach, latent volatility is modeled as a deterministic
function of past data available to the econometrician. The significant advantage
of the GARCH approach is that empirical estimation can be implemented easily
using quasi-maximum likelihood (QML) techniques.
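The ease of QML estimation mentioned above can be sketched concretely: build the conditional variance recursion, form the Gaussian quasi-log-likelihood, and maximize it numerically. The following is a minimal illustration for a GARCH(1,1) model; the function names, parameter values, and simulated data are purely illustrative, not from the chapter.

```python
# Sketch: QML estimation of a GARCH(1,1) model on simulated data.
# All names and parameter values here are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

def simulate_garch(omega, alpha, beta, n, seed=0):
    """Simulate r_t = sigma_t * z_t with sigma_t^2 = omega + alpha*r_{t-1}^2 + beta*sigma_{t-1}^2."""
    rng = np.random.default_rng(seed)
    r = np.empty(n)
    s2 = omega / (1.0 - alpha - beta)  # start at the unconditional variance
    for t in range(n):
        r[t] = np.sqrt(s2) * rng.standard_normal()
        s2 = omega + alpha * r[t] ** 2 + beta * s2
    return r

def neg_qml(params, r):
    """Negative Gaussian quasi-log-likelihood; consistent even if z_t is non-Gaussian."""
    omega, alpha, beta = params
    if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
        return np.inf  # enforce positivity and covariance stationarity
    s2 = np.empty_like(r)
    s2[0] = r.var()  # initialize the variance recursion at the sample variance
    for t in range(1, len(r)):
        s2[t] = omega + alpha * r[t - 1] ** 2 + beta * s2[t - 1]
    return 0.5 * np.sum(np.log(s2) + r ** 2 / s2)

r = simulate_garch(omega=0.1, alpha=0.05, beta=0.90, n=5000)
fit = minimize(neg_qml, x0=[0.2, 0.1, 0.8], args=(r,), method="Nelder-Mead")
omega_hat, alpha_hat, beta_hat = fit.x
```

In practice one would also compute robust (sandwich) standard errors, which is what makes the estimator "quasi" maximum likelihood; the sketch above only recovers the point estimates.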
A second class of time-varying volatility models comprises those termed “stochastic
volatility”: these are usually specified in continuous time and allow for a separate
error process to drive the dynamics of volatility. Continuous-time stochastic volatil-
ity models have become fashionable over recent years as they allow a rich variety
of alternative specifications. Moreover, stochastic volatility models admit closed-form
or semi-closed-form solutions in many important asset-pricing applications. Unfortunately,
the estimation of stochastic volatility models from discretely sampled data
is particularly difficult because the likelihood function is not usually available in a
tractable form. This intractability has fueled a significant research effort by finan-
cial econometricians. Continuous-time stochastic volatility models originate from
the mathematical finance and option-pricing literature.^3 As one of the fathers of
continuous-time finance, the late Fischer Black, remarked: “suppose we use the stan-
dard deviation of possible future returns on a stock as a measure of its volatility. Is it
reasonable to take that volatility as constant over time? I think not” (Black, 1976).
The “official” year of birth of continuous-time stochastic volatility models may
be taken to be 1987 as, in that year, Hull and White (1987), Johnson and Shanno
(1987), Scott (1987) and Wiggins (1987) all developed option-pricing models with
stochastic volatility. These models extended those of Black and Scholes (1973) and
Merton (1973) by allowing volatility to follow a separate diffusion process. Scott
(1987) and Wiggins (1987) made early attempts at estimating the parameters of the
model using a method-of-moments approach. In this chapter we provide a selective
review of some of the other popular methods that have been proposed over the
years for estimating continuous-time stochastic volatility models.^4
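The defining feature of this model class, a separate error process driving volatility, can be illustrated with a short Euler-discretization sketch in which log-volatility follows a mean-reverting Ornstein-Uhlenbeck process, in the spirit of Scott (1987) and Wiggins (1987). All function names and parameter values below are illustrative assumptions, not taken from the chapter.

```python
# Sketch: Euler discretization of a stochastic volatility diffusion with
# OU log-volatility. Parameters are illustrative, not calibrated.
import numpy as np

def simulate_sv(mu=0.05, kappa=4.0, theta=np.log(0.2), xi=0.5,
                rho=-0.5, s0=100.0, n_steps=252, dt=1 / 252, seed=1):
    """Simulate (price, volatility) paths:
       dS/S   = mu dt + sigma_t dW1,
       d ln sigma_t = kappa*(theta - ln sigma_t) dt + xi dW2,
       with corr(dW1, dW2) = rho (the 'leverage' correlation)."""
    rng = np.random.default_rng(seed)
    s = np.empty(n_steps + 1)
    h = np.empty(n_steps + 1)  # h_t = ln sigma_t
    s[0], h[0] = s0, theta     # start log-volatility at its long-run mean
    for t in range(n_steps):
        z1 = rng.standard_normal()
        z2 = rho * z1 + np.sqrt(1 - rho ** 2) * rng.standard_normal()
        sigma = np.exp(h[t])
        # log-Euler step for the price keeps it strictly positive
        s[t + 1] = s[t] * np.exp((mu - 0.5 * sigma ** 2) * dt
                                 + sigma * np.sqrt(dt) * z1)
        h[t + 1] = h[t] + kappa * (theta - h[t]) * dt + xi * np.sqrt(dt) * z2
    return s, np.exp(h)

prices, vols = simulate_sv()
```

Note that z2 is a second, separate shock: unlike GARCH, where today's variance is a deterministic function of past returns, here volatility has its own source of randomness, which is precisely what makes the discrete-data likelihood intractable.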
The need to estimate stochastic or time-varying volatility stemmed from the
desire to explain and reproduce some of the stylized facts that have been observed
in financial data:



  • Fat tails. Since the early studies of Fama (1963, 1965) and Mandelbrot (1963), it
    has been well documented that asset returns are leptokurtic and violate the
    assumption of normality. Continuous-time models such as Merton’s (1976)
    jump diffusion can generate non-normality and fat tails.

  • Volatility clustering. In most financial markets we observe episodes of high
    volatility interspersed with episodes of low volatility, so that large returns tend
    to be followed by large returns and small returns tend to be followed by small
    returns, irrespective of sign. In fact, one of the reasons for the huge success of
    GARCH modeling is that it provides a direct link between time-varying volatil-
    ity, conditional heteroskedasticity and unconditional leptokurtosis. The implied
    clustering effect is depicted in Figure 19.1, which shows the daily returns of the
    Standard & Poor’s (S&P) 500 index over the period 1990–2007.
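The fat-tails point above can be checked in simulation: a Merton-style (1976) jump diffusion produces log-returns whose sample kurtosis exceeds the Gaussian benchmark of 3. The sketch below is illustrative; the parameter values and function names are assumptions, not estimates from the chapter.

```python
# Sketch: log-returns from a Merton-style jump diffusion exhibit fat tails
# (kurtosis > 3). Parameters are illustrative.
import numpy as np

def merton_returns(mu=0.0, sigma=0.01, lam=0.1, mj=0.0, sj=0.03,
                   n=100_000, dt=1.0, seed=2):
    """Log-returns over intervals of length dt: a Gaussian diffusion part
       plus a compound Poisson sum of N(mj, sj^2) jumps."""
    rng = np.random.default_rng(seed)
    diffusion = ((mu - 0.5 * sigma ** 2) * dt
                 + sigma * np.sqrt(dt) * rng.standard_normal(n))
    n_jumps = rng.poisson(lam * dt, size=n)
    # sum of k iid N(mj, sj^2) jumps is N(k*mj, k*sj^2)
    jumps = mj * n_jumps + sj * np.sqrt(n_jumps) * rng.standard_normal(n)
    return diffusion + jumps

def kurtosis(x):
    """Sample kurtosis; equals 3 for Gaussian data in large samples."""
    x = x - x.mean()
    return np.mean(x ** 4) / np.mean(x ** 2) ** 2

r = merton_returns()
k = kurtosis(r)  # exceeds 3 when the jump intensity lam is positive
```

Setting `lam=0` recovers the pure Gaussian diffusion, for which the same statistic settles near 3, so the excess kurtosis is attributable to the jump component.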
