David T. Jacho-Chávez and Pravin K. Trivedi 813
Train (2003, Ch. 9) gives examples in which 100 Halton draws have efficiency that
exceeds that of 1,000 random draws. However, even S = 100 may not be high
enough. The penalty for setting S too low is potential bias of the estimator, and
having to determine the appropriate value of S by trial and error is a disadvantage.
One expects longer computational time when additional regressors appear
in the model, as would be the case with a more realistic model that adds further
sociodemographic and health status factors. In this example, one factor-loading
coefficient is estimated to be significantly different from zero, which confirms that
endogeneity of the HMO variable is an empirically important consideration.
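The efficiency comparison between Halton and random draws can be illustrated with a small sketch (not from the chapter): a one-dimensional Halton (van der Corput) sequence in base 2 versus pseudo-random uniform draws, each with S = 100, used to approximate a toy moment whose true value is known. The function name `halton` and the toy integrand are illustrative assumptions, not code referenced by the text.

```python
import numpy as np

def halton(n, base=2):
    """One-dimensional Halton (van der Corput) sequence in a prime base.

    The i-th draw is obtained by writing i in the given base and
    reflecting its digits about the radix point.
    """
    draws = np.empty(n)
    for i in range(1, n + 1):
        value, denom, k = 0.0, 1.0, i
        while k > 0:
            denom *= base
            value += (k % base) / denom
            k //= base
        draws[i - 1] = value
    return draws

# Toy simulated moment: approximate E[u^2] for u ~ Uniform(0, 1),
# whose true value is 1/3, with S = 100 draws of each type.
S = 100
halton_err = abs(np.mean(halton(S) ** 2) - 1 / 3)
random_err = abs(np.mean(np.random.default_rng(42).random(S) ** 2) - 1 / 3)
print(f"Halton error: {halton_err:.5f}, random error: {random_err:.5f}")
```

Because the Halton points fill the unit interval far more evenly than pseudo-random draws, the simulation error of the quasi-Monte Carlo estimate typically shrinks faster in S, which is the source of the efficiency gain Train reports.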
15.7 Concluding remarks
In applied microeconometrics, computational matters have always featured promi-
nently, and computational convenience has often been a criterion for choosing
a methodology. Historically, advances in the quality and scope of research have
moved in tandem with computational advances. During the 1960s and 1970s,
the computational treatment of sample selection, discrete choice, nonlinearities,
and limited dependent variable models remained central to the research agenda
in microeconometrics. For example, in his 1976 survey paper on quantal choice
analysis, McFadden mentioned computation of the multinomial probit model as
an important unsolved problem, and remarked that “It would be particularly use-
ful to achieve a computational breakthrough on the multinomial normal model.”
This remained an important research topic for close to two decades. Although
such topics have not disappeared altogether, their importance is now smaller. Tak-
ing advantage of raw computing power, simulation-based estimation and inference
methods based on resampling have emerged as feasible and practical approaches to
many computational problems. Although this chapter did not survey the Bayesian
approaches, the advances in this area have also been revolutionary; indeed, there
are numerous cases in which the Bayesian MCMC computational approaches have
proved more attractive than their frequentist counterparts. However, a major per-
sistent computational challenge remains. It arises from the goal of constructing
empirical models that can address important and detailed issues of public pol-
icy without resorting to excessive use of parametric restrictions. Such models are
inherently structural, dynamic and high dimensional, and they often attempt to
accommodate the heterogeneity in tastes, constraints and objectives of decision
makers. These models face both the conceptual problems of identification and
computational problems of implementation. It seems safe to predict that such
challenges will remain with us for the foreseeable future.
Notes
- Notable exceptions are source codes written in GAUSS, MATLAB and S-PLUS, which can
often be interpreted by OxGAUSS, Octave and R, respectively. The latter are non-proprietary
languages.