Palgrave Handbook of Econometrics: Applied Econometrics


786 Computational Considerations in Microeconometrics


available. An example is the method that uses quasi-random draws based on Halton
sequences described in Bhat (2001) and Train (2003).^6 Halton sequences have two
desirable properties vis-à-vis standard pseudo-random draws. First, they are
designed to give more even coverage over the domain of the mixing distribution.
Second, the simulated probabilities are negatively correlated over observations.
This negative correlation reduces the variance of the simulated likelihood func-
tion. Under suitable regularity conditions (see Bhat, 2001), the integration error
using Halton quasi-random sequences is of order n^(−1), as compared with
pseudo-random sequences, where the convergence rate is n^(−1/2). An example of
an MSL estimator using Halton quasi-random draws is given in section 15.6.
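The contrast between the two types of draws can be illustrated with a short simulation. The radical-inverse construction below and the toy uniform integrand are illustrative choices, not taken from the text:

```python
import numpy as np

def halton(n, base):
    """First n points of the Halton sequence for a prime base
    (the radical inverse of 1, 2, ..., n in that base)."""
    seq = np.zeros(n)
    for i in range(n):
        f, k = 1.0, i + 1
        while k > 0:
            f /= base
            seq[i] += f * (k % base)
            k //= base
    return seq

# Toy integral: E[X^2] for X ~ Uniform(0, 1), whose true value is 1/3.
n = 1000
halton_draws = halton(n, base=2)
pseudo_draws = np.random.default_rng(0).random(n)

halton_est = np.mean(halton_draws ** 2)   # more even coverage -> smaller error
pseudo_est = np.mean(pseudo_draws ** 2)
```

In a mixed logit application the uniform draws would be passed through the inverse c.d.f. of the mixing distribution, and with several random coefficients a different prime base is used for each dimension to avoid correlation across dimensions.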


15.3.4 Resampling methods


Empirical microeconometrics depends heavily on asymptotic theory to justify
point and interval estimation procedures because exact results are generally not
available. Sometimes an investigator may be interested in some function of esti-
mates and data for which even the asymptotic result may not be available. In some
cases only an approximation to the asymptotic approximation may be available.
The availability of an asymptotic approximation is no guarantee that it provides
a good approximation to the sampling distribution of an estimator. Motivated by
these difficulties, econometricians increasingly use computer-intensive resampling
methods to obtain estimates of the moments of the asymptotic distribution. The
bootstrap and jackknife are two leading examples, but only the first is sketched
here.
There exists a wide range of bootstrap methods (see, e.g., Davidson and
MacKinnon, 2006). The simplest bootstrap methods can support statistical infer-
ence when conventional methods, such as variance estimation, are difficult to
implement, either because a formula is not available or because it is computation-
ally intractable. Another, more complicated, use of the bootstrap attempts to pro-
vide asymptotic refinements that can improve on first-order asymptotic results.
Applied researchers are most often interested in the first, simpler, application of
the bootstrap.
The basic idea behind the bootstrap is to approximate the distribution of a
statistic by a simulation in which one samples from the empirical or the fitted
distribution of the data. That is, one uses a given sample repeatedly to derive the
sampling properties of statistics of interest. Bootstrap methods rely on asymptotic
theory for their validity.
Consider, in the context of the regression of y_i | x_i, i = 1, ..., n, the problem of
inference on a parameter θ = φ(β), where φ(β) is a continuous function of the
regression parameters β. The bootstrap algorithm for the variance of θ is explained
in algorithm 15.3.4.0.1.
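A minimal sketch of a bootstrap variance computation in this spirit uses a paired (nonparametric) bootstrap for a linear regression. The data-generating process and the ratio function φ below are illustrative assumptions, not a transcription of algorithm 15.3.4.0.1:

```python
import numpy as np

def ols(y, X):
    """OLS coefficient estimates."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def bootstrap_variance(y, X, phi, B=999, seed=0):
    """Paired-bootstrap variance of theta = phi(beta_hat):
    resample (y_i, x_i) pairs with replacement, re-estimate beta,
    apply phi, and take the sample variance over the B replications."""
    rng = np.random.default_rng(seed)
    n = len(y)
    thetas = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, size=n)   # draw one bootstrap sample of rows
        thetas[b] = phi(ols(y[idx], X[idx]))
    return float(thetas.var(ddof=1))

# Illustrative data: y = 1 + 2x + noise; theta = slope / intercept.
rng = np.random.default_rng(42)
n = 200
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + rng.normal(size=n)
phi = lambda b: b[1] / b[0]

v_boot = bootstrap_variance(y, X, phi)
```

Resampling (y_i, x_i) pairs keeps each bootstrap sample internally consistent and requires no assumption about the error distribution; residual-based resampling is an alternative when the regression model is trusted.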
A computationally simpler alternative to the bootstrap variance is an estimate
obtained by the so-called delta method, based on a first-order Taylor approximation
ofφ(β). In some applications this method can yield very poor results, whereas the
bootstrap may be more robust.
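For comparison, the delta method replaces the resampling loop with a single gradient calculation. The ratio function and its analytic gradient below are again illustrative assumptions:

```python
import numpy as np

def delta_variance(y, X, phi_grad):
    """First-order (delta-method) variance of phi(beta_hat):
    g' V g, where g is the gradient of phi at beta_hat and
    V is the usual OLS covariance matrix estimate."""
    n, k = X.shape
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - k)          # error-variance estimate
    V = sigma2 * np.linalg.inv(X.T @ X)       # estimated Cov(beta_hat)
    g = phi_grad(beta)
    return float(g @ V @ g)

# Same illustrative setup: y = 1 + 2x + noise; theta = b1 / b0.
rng = np.random.default_rng(42)
n = 200
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + rng.normal(size=n)

# Gradient of phi(b) = b[1]/b[0] is (-b1/b0^2, 1/b0).
phi_grad = lambda b: np.array([-b[1] / b[0] ** 2, 1.0 / b[0]])

v_delta = delta_variance(y, X, phi_grad)
```

The delta method is cheap, but because it linearizes φ around the point estimate, it can be unreliable when φ is strongly nonlinear or the estimate is near a point where the gradient blows up (here, an intercept near zero), which is exactly where the bootstrap tends to be the more robust choice.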
