Palgrave Handbook of Econometrics: Applied Econometrics

Microeconometrics: Methods and Developments


estimation, are used for specification testing. For example, in linear regression of y on x_1, the hypothesis that x_2 can be excluded as a regressor implies E[(y − x_1′β_1) | x_2] = 0. This can be specified as a test of H_0: E[(y − x_1′β_1) x_2] = 0. Here it can be difficult to obtain V̂[m̂], though auxiliary regressions are available to compute an asymptotically equivalent version of M in the special case that θ̂ is the MLE. Examples of conditional moment tests, proposed by Newey (1985) and Tauchen (1985), include the information matrix test of White (1982) and chi-squared goodness-of-fit tests.
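As an illustrative sketch only, the moment test of H_0: E[(y − x_1′β_1) x_2] = 0 can be computed via one common auxiliary-regression form: regress the OLS residuals on (x_1, x_2) and use N·R², which is asymptotically chi-squared with one degree of freedom under H_0. The data here are synthetic, and the simple lstsq-based implementation is an assumption for the example, not the chapter's own code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data (illustrative): under H0, y depends on x1 only.
N = 500
x1 = np.column_stack([np.ones(N), rng.normal(size=N)])  # includes intercept
x2 = rng.normal(size=N)
y = x1 @ np.array([1.0, 2.0]) + rng.normal(size=N)

# Step 1: estimate beta1 by OLS of y on x1 and form residuals.
beta1 = np.linalg.lstsq(x1, y, rcond=None)[0]
u = y - x1 @ beta1

# Step 2: auxiliary regression of the residuals on (x1, x2).
# N * R^2 is asymptotically chi-squared(1) under H0.
X = np.column_stack([x1, x2])
u_fit = X @ np.linalg.lstsq(X, u, rcond=None)[0]
r2 = 1.0 - np.sum((u - u_fit) ** 2) / np.sum((u - u.mean()) ** 2)
lm_stat = N * r2
print(lm_stat)  # compare with the chi-squared(1) critical value, 3.84 at 5%
```

As the surrounding text notes, this convenient auxiliary regression can have poor finite-sample size, so in practice its p-value is often bootstrapped.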
The Hausman test and OIR tests are routinely used in GMM applications. Conditional moment tests are less commonly used, even though they are easy to implement in likelihood settings, where they would seem especially useful given concerns about reliance on distributional assumptions. One reason is that the convenient auxiliary regressions used to compute them can have poor finite-sample size properties, though this can be rectified by a bootstrap with asymptotic refinement (see, for example, Horowitz, 1994). A second reason is more practical: especially with large samples, any model is quite likely to be rejected at conventional 5% significance levels.
For model selection when models are nested, standard hypothesis testing methods can be used. For model selection among non-nested models there is an extensive literature that is not addressed here; a recent survey is provided by Pesaran and Weeks (2001).


14.4.4 Bootstrap


Inference in microeconometrics is based on asymptotic results that provide only an approximation given typical sample sizes. The bootstrap, introduced by Efron (1979), provides an alternative approximation by Monte Carlo simulation.
The motivation of the bootstrap is to view the data in hand, or the fitted DGP, as the population. Then draw B resamples from this population and, for each resample, compute the relevant statistic. The empirical distribution of the resulting B statistics is used to approximate the distribution of the original statistic.
The most common use of the bootstrap is to calculate standard errors. The data w_1, ..., w_N are assumed to be i.i.d. The bootstrap standard error procedure is:



  1. Do the following B times:

    • Draw a bootstrap resample w*_1, ..., w*_N by sampling with replacement from the original data (called a paired bootstrap).

    • Obtain estimate θ̂* of θ, where for simplicity θ is scalar.

  2. Use the B estimates θ̂*_1, ..., θ̂*_B to approximate the distribution of θ̂. In particular, the bootstrap estimate of the standard error of θ̂ is:


    s_{θ̂,Boot} = [ (1/(B − 1)) Σ_{b=1}^{B} (θ̂*_b − θ̄*)² ]^{1/2},    (14.25)

where θ̄* = B^{−1} Σ_{b=1}^{B} θ̂*_b. This is simply the standard deviation of θ̂*_1, ..., θ̂*_B.
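The two-step procedure and equation (14.25) can be sketched as follows, a minimal numpy example assuming synthetic i.i.d. data and the sample mean as the scalar estimator θ̂ (any estimator would do in its place):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic i.i.d. data; theta is the population mean, estimated by
# the sample mean.
w = rng.exponential(scale=2.0, size=200)

# Step 1: draw B paired-bootstrap resamples and re-estimate theta on each.
B = 999
theta_star = np.empty(B)
for b in range(B):
    idx = rng.integers(0, len(w), size=len(w))  # sample with replacement
    theta_star[b] = w[idx].mean()

# Step 2, equation (14.25): the bootstrap standard error is the standard
# deviation of the B estimates, with divisor B - 1.
se_boot = theta_star.std(ddof=1)

# For the sample mean, the analytical standard error s / sqrt(N) gives
# a benchmark the bootstrap estimate should be close to.
se_analytical = w.std(ddof=1) / np.sqrt(len(w))
print(se_boot, se_analytical)
```

The paired bootstrap resamples whole observations w_i, so it is valid under the i.i.d. assumption stated above without modeling the error distribution.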