
One of the advantages of using panel data is the possibility of accounting for the
correlation amongst the effects and the explanatory variables. To allow for this
correlation, Chamberlain (1984) suggested using a random effects approach and
specifying a distribution for the individual effects conditional on the values of the
explanatory variables at each wave of the panel. This specification may contain
polynomial terms and interactions in the xs as well. Combining this with assump-
tions about the conditional expectation of the initial and final values of the latent
variable allows the dynamic model to be solved out to give linear reduced forms
for the latent variables at each wave of the panel. Estimates of the reduced forms
will be sensitive to assumptions about the distribution of the error terms, the lin-
earity of the expected value, and the conditional mean independence assumption.
However, these hypotheses can be checked by specification tests at the level of
the reduced form, which is easier to do than testing the dynamic specification.
At the second stage, on the basis of the reduced form coefficients, the parameters
of the underlying dynamic structural model can be derived using various estima-
tors. The simplest is to apply the within-groups transformation to the dynamic
model after replacing the latent variables by their predicted counterparts (Bover
and Arellano, 1997). This two-step within-groups procedure is simple to apply, but
provides inefficient parameter estimates. Chamberlain (1984) proposed a fully effi-
cient minimum distance (MD) estimator. Instead of using Chamberlain’s approach,
Bover and Arellano (1997) propose a three-step within-groups GMM, which also
facilitates tests of the overidentifying restrictions.
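
As a rough illustration of the second step, the sketch below applies the within-groups transformation to a dynamic model after substituting predicted latent variables, in the spirit of the two-step procedure of Bover and Arellano (1997). All names, parameter values, and the simulated data are hypothetical, and the first-stage reduced forms are replaced by the assumption that the predictions are already available.

```python
import numpy as np

# Toy sketch of the two-step within-groups procedure. The simulated
# y_star_hat stands in for predicted latent variables from first-stage
# reduced forms; everything here is illustrative, not the chapter's code.
rng = np.random.default_rng(0)
N, T = 500, 6
gamma_true, beta_true = 0.5, 1.0

x = rng.normal(size=(N, T))
alpha = rng.normal(size=N)                     # individual effects
y_star_hat = np.zeros((N, T))
for t in range(1, T):
    y_star_hat[:, t] = (gamma_true * y_star_hat[:, t - 1]
                        + beta_true * x[:, t] + alpha
                        + rng.normal(size=N))

def within(z):
    """Within-groups transformation: demean each individual's series."""
    return z - z.mean(axis=1, keepdims=True)

# Step 2: within-groups OLS of the dynamic model
#   y*_it = gamma y*_i,t-1 + beta x_it + alpha_i + e_it,
# with the predictions replacing the latent variables.
y_cur, y_lag, x_cur = y_star_hat[:, 1:], y_star_hat[:, :-1], x[:, 1:]
Y = within(y_cur).ravel()
X = np.column_stack([within(y_lag).ravel(), within(x_cur).ravel()])
gamma_hat, beta_hat = np.linalg.lstsq(X, Y, rcond=None)[0]
print(gamma_hat, beta_hat)
```

As the text notes, this estimator is simple but inefficient, and in short panels the within-groups transformation of a dynamic model is also biased, so the sketch is meant only to show the mechanics of the second step.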


12.4.2 Numerical integration and classical simulation-based inference


In panel data specifications, unobserved heterogeneity is often modeled as a ran-
dom effect and “integrated out” of the log-likelihood function. Monte Carlo
simulation techniques can be used to deal with the computational intractabil-
ity of nonlinear models, such as panel and multinomial probit models. Popular
methods of simulation-based inference include classical maximum simulated like-
lihood (MSL) estimation, and Bayesian MCMC estimation. This section introduces
the classical approach (for a review of the methods and applications to health
economics, see Contoyannis et al., 2004a).
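
To make the idea concrete, here is a minimal MSL sketch for a random-effects probit. The model, parameter values, and number of draws are invented for illustration rather than taken from the chapter: the random effect is "integrated out" by averaging the conditional likelihood over pseudo-random draws.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Minimal MSL sketch (all settings hypothetical):
#   y_it = 1{ beta x_it + sigma u_i + e_it > 0 },  u_i ~ N(0, 1),
# with the individual effect u_i integrated out by simulation.
rng = np.random.default_rng(1)
N, T, R = 300, 4, 100
beta0, sigma0 = 1.0, 0.8
x = rng.normal(size=(N, T))
u = rng.normal(size=(N, 1))
y = (beta0 * x + sigma0 * u + rng.normal(size=(N, T)) > 0).astype(float)

# Hold the draws fixed across optimiser iterations so the simulated
# log-likelihood is a smooth function of the parameters.
draws = rng.standard_normal((N, 1, R))

def neg_simulated_loglik(theta):
    beta, sigma = theta                                    # sigma identified up to sign
    p = norm.cdf(beta * x[:, :, None] + sigma * draws)     # (N, T, R)
    contrib = np.where(y[:, :, None] == 1.0, p, 1.0 - p)   # P(y_it | u_i = draw)
    sim_lik = contrib.prod(axis=1).mean(axis=1)            # average over the R draws
    return -np.sum(np.log(sim_lik + 1e-12))

res = minimize(neg_simulated_loglik, x0=np.array([0.5, 0.5]),
               method="Nelder-Mead")
print(res.x)  # should lie near (1.0, 0.8)
```

Keeping the same draws at every evaluation of the objective (common random numbers) is standard practice in MSL, since redrawing would make the simulated likelihood a noisy, non-smooth function of the parameters.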
Numerical integration by quadrature works well with low dimensions, but
computational problems arise with higher dimensions. Instead, Monte Carlo sim-
ulation can be used to approximate integrals that are numerically intractable.
This includes numerous models derived from the multivariate normal distribu-
tion. Simulation approaches use pseudo-random draws of the evaluation points
and computational cost rises less rapidly than with quadrature.
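
As a small numerical illustration of this contrast, the sketch below approximates E[h(u)] for a standard normal u by both methods; the integrand h and all settings are arbitrary choices made here, not taken from the chapter.

```python
import numpy as np

# Compare Gauss-Hermite quadrature with plain Monte Carlo for E[h(u)],
# u ~ N(0, 1). The integrand is chosen so the true value is known:
# E[exp(-u^2 / 2)] = 1 / sqrt(2).
h = lambda u: np.exp(-0.5 * u**2)

# Gauss-Hermite: E[h(u)] = (1/sqrt(pi)) * sum_k w_k h(sqrt(2) x_k)
nodes, weights = np.polynomial.hermite.hermgauss(20)
quad = (weights * h(np.sqrt(2.0) * nodes)).sum() / np.sqrt(np.pi)

# Monte Carlo: replace the population mean by a sample average of draws.
rng = np.random.default_rng(42)
mc = h(rng.standard_normal(200_000)).mean()

print(quad, mc, 1.0 / np.sqrt(2.0))  # all three agree to several digits
```

In one dimension, 20 quadrature nodes already match 200,000 draws; but a d-dimensional product rule needs 20^d evaluations, while the Monte Carlo cost per draw is unchanged, which is the contrast drawn above.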
The principle behind simulation-based estimation is to replace a population
value by a sample analogue. This means that laws of large numbers and central
limit theorems can be used to derive the statistical properties of the estimators.
The basic problem is to evaluate an integral of the form:

\[
\int_u [h(u)]\, dF(u) = E_u[h(u)], \tag{12.19}
\]