Palgrave Handbook of Econometrics: Applied Econometrics

Andrew M. Jones

Correlation between the two equations is introduced by assuming that the error
terms are drawn from a bivariate normal distribution. Although it is not clear from
the paper, it appears that the individual effects (u_ij) are assumed to be independent of each other, which seems rather restrictive. Computation is based on a combination of Gauss–Hermite quadrature, to integrate over the time-invariant individual
effect, and Gauss–Legendre quadrature, to integrate over the bivariate distribution
within each period.
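The quadrature step for the individual effect can be illustrated in isolation. A minimal sketch, assuming a normal effect u ~ N(0, σ²) and an illustrative node count (not the authors' implementation):

```python
import numpy as np

def gh_expectation(f, sigma, n_nodes=12):
    """Approximate E[f(u)] for u ~ N(0, sigma^2) by Gauss-Hermite quadrature."""
    # Nodes/weights for the weight function exp(-x^2)
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    # Change of variables u = sqrt(2) * sigma * x maps N(0, sigma^2) onto it
    return np.sum(weights * f(np.sqrt(2) * sigma * nodes)) / np.sqrt(np.pi)

# Check against a known moment: E[exp(u)] = exp(sigma^2 / 2)
sigma = 0.5
approx = gh_expectation(np.exp, sigma)
exact = np.exp(sigma**2 / 2)
```

In a likelihood setting, f would be the period-by-period contribution conditional on the individual effect, so the quadrature sum integrates the effect out of the likelihood.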


12.5.3.2 Finite mixtures


Deb and Trivedi (1997, 2002), Deb and Holmes (2000), Jiménez-Martín et al. (2002)
and Sarma and Simpson (2006) estimate finite mixture models for count measures
of health care use, in which a Negbin distribution is assumed within each latent
class. Lourenço and Ferreira (2005) extend the application of the Negbin finite mixture model to a truncated sample from the 2003–04 Europep survey for Portugal,
where data are only collected for those who visit health centers. This means that
the data are drawn from an endogenous sampling scheme and are truncated at zero, which raises the question of whether the distribution of unobserved heterogeneity
should be defined over the whole population or only the truncated sample.
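The zero-truncation described here amounts to conditioning the count distribution on a positive outcome, P(Y = y | Y > 0) = P(Y = y) / (1 − P(Y = 0)). A minimal sketch using scipy's Negbin parameterization (the parameter values are illustrative):

```python
import numpy as np
from scipy.stats import nbinom

def truncated_negbin_pmf(y, n, p):
    """Pmf of a Negbin count conditional on being positive (y >= 1)."""
    return nbinom.pmf(y, n, p) / (1.0 - nbinom.pmf(0, n, p))

# The truncated pmf should sum to (nearly) one over y >= 1
y = np.arange(1, 50)
probs = truncated_negbin_pmf(y, n=2.0, p=0.4)
```

In a finite mixture version, each latent class would carry its own truncated Negbin pmf, and the sample likelihood mixes over classes with the class probabilities.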
Jochmann and Leon-Gonzalez (2004) propose a specification that uses a semiparametric Bayesian approach, which can be seen as an extension of Deb and Trivedi (1997). They start with a parametric “random coefficients” specification of the Poisson model as a benchmark. In this model the random slopes (b_i) are
assumed to be drawn from a multivariate normal distribution, so the conditional
mean function takes the form:


λ_it = exp(x′_it β + w′_it b_i + ε_it). (12.35)
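Simulating counts from this conditional mean clarifies the notation; the sample sizes, parameter values and the restriction to a random intercept below are illustrative, not those of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 200, 4                       # individuals and periods (illustrative)
beta = np.array([0.2, -0.5])        # fixed coefficients on x_it
x = rng.normal(size=(N, T, 2))
w = np.ones((N, T, 1))              # w_it = 1: a random intercept only
b = rng.multivariate_normal([0.0], [[0.3]], size=N)   # b_i ~ N(0, 0.3)
eps = rng.normal(scale=0.1, size=(N, T))

# Conditional mean as in (12.35), then Poisson counts given lambda_it
lam = np.exp(x @ beta + (w * b[:, None, :]).sum(axis=-1) + eps)
y = rng.poisson(lam)
```

The multivariate normal draw for b_i is the parametric benchmark; the semiparametric extension below replaces this distributional assumption.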

The semiparametric element of the model is introduced by using a Dirichlet
process mixture for the prior on the random effects. This gives a mixture model with
a random number of components and extends the usual treatment of LCMs that
fix the number of components. The Dirichlet process specifies a base distribution,
in this case assumed to be normal, and a fixed number of mass points, in this case
set equal to 10. Then, new draws of the random effects are a mixture of draws from
the base distribution and draws from existing clusters of values. The end product is
a discrete distribution where the number of mass points is random. Estimation of
the model is done by MCMC, based on Gibbs sampling, with data augmentation
(the random effects and latent variables are treated as parameters to be estimated)
and incorporating a Metropolis–Hastings step where the Gibbs sampling cannot be
used. The MCMC algorithm was run for 30,000 iterations, discarding the first 5,000
for the burn-in period. The application uses data on the number of visits to the
doctor in the previous quarter from the 1997–2001 waves of the GSOEP. The aim
is to test for horizontal equity in the delivery of care by seeing whether non-need
factors play a significant role in explaining variation in utilization. This is tested
using Bayes factors and horizontal equity is not rejected with these data.
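The Pólya-urn view of the Dirichlet process described above, in which each new draw of the random effects is either a fresh value from the base distribution or a copy of an earlier draw, can be sketched as follows (the concentration parameter, standard-normal base and sample size are illustrative):

```python
import numpy as np

def dp_draws(n, alpha, base_sampler, rng):
    """Sequential draws from a Dirichlet process via the Polya urn:
    draw i is new with probability alpha / (alpha + i), otherwise it
    copies one of the i existing draws chosen uniformly at random."""
    draws = []
    for i in range(n):
        if rng.random() < alpha / (alpha + i):
            draws.append(base_sampler(rng))        # new cluster from base G0
        else:
            draws.append(draws[rng.integers(i)])   # join an existing cluster
    return np.array(draws)

rng = np.random.default_rng(1)
theta = dp_draws(500, alpha=2.0, base_sampler=lambda r: r.normal(), rng=rng)
n_clusters = np.unique(theta).size   # random, typically far fewer than 500
```

Because draws cluster on repeated values, the realized distribution is discrete with a random number of mass points, which is the property the text highlights relative to latent class models with a fixed number of components.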
The two dominant strands in the recent literature, hurdle models and finite
mixture models, are brought together in the latent class hurdle model developed
