Palgrave Handbook of Econometrics: Applied Econometrics


Computational Considerations in Microeconometrics


dynamically loaded by the original program to speed up overall execution. Furthermore, since some interpreted languages are themselves written in C, it is not surprising that native C code can make use of libraries in local installations of these interpreted languages' compilers.^4


15.2.3 Parallelization


Standard 4GLs perform serial computations, i.e., instructions are executed one after another on a single computer with a single CPU. However, the advent of clusters,^5 workstations and single computers with multiple processors has made parallel computing a tangible possibility for most economists. Parallel computing is the simultaneous use of more than one CPU to execute a program or solve a computational problem. The problem is broken into parts that can be solved concurrently, and each part is further broken down into a series of instructions that are executed simultaneously on different CPUs. Coordination among CPUs is achieved by using a language-independent communications protocol known as the Message Passing Interface (MPI).
Parallel computing means that computations that would otherwise take hours or days can be performed in minutes or hours. Algorithms that lend themselves to parallelization commonly involve iterative loops whose passes can be performed independently of one another. Examples include most resampling methods, such as the bootstrap, and the set-search algorithms often used for cross-validation in non/semiparametric techniques (see section 15.4.1.1 for an illustration). Creel (2005) discusses many other econometric procedures that allow for parallel computation, such as Monte Carlo simulation, maximum likelihood (ML) and generalized method of moments (GMM) estimation.
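The bootstrap illustrates why such loops parallelize so well: each replication resamples the data and recomputes the statistic entirely independently of the others, so replications can simply be farmed out to separate workers. The sketch below is illustrative only; it uses Python's standard `multiprocessing` module as a stand-in for MPI, and all function names are invented for the example.

```python
import random
from multiprocessing import Pool

def one_replication(args):
    """One bootstrap replication: resample with replacement, recompute the mean."""
    data, seed = args
    rng = random.Random(seed)          # independent seed per replication
    resample = [rng.choice(data) for _ in data]
    return sum(resample) / len(resample)

def parallel_bootstrap(data, reps=1000, workers=4):
    """Distribute the independent replications across worker processes."""
    tasks = [(data, seed) for seed in range(reps)]
    with Pool(workers) as pool:
        return pool.map(one_replication, tasks)

if __name__ == "__main__":
    data = [1.0, 2.0, 3.0, 4.0, 5.0]
    means = parallel_bootstrap(data, reps=200)
    print(len(means))   # one bootstrapped statistic per replication
```

Because no replication depends on any other, the speed-up from adding workers is close to linear until communication overhead dominates; the same structure carries over directly to MPI-based tools such as Rmpi or MatlabMPI.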
Most 4GLs used in econometrics either allow the parallelization of procedures or make full use of multiprocessor computers. Examples include MatlabMPI in MATLAB, and Rmpi and snow in R, for the former, and Stata/MP® for the latter.


15.3 Computing and modeling


There are many ways in which more memory and computing power can potentially
improve the quality of empirical analysis in microeconometrics.
The first context is the storage and manipulation of large, complex datasets, and the provision of numerical, graphical and visual displays of data in ways that yield valuable insights into the patterns and structure within such datasets.
The second context is that of solving (especially) high-dimensional optimization problems that arise in model estimation. Estimation of many standard microeconometric models involves the solution of nonlinear equations by iterative methods, which are generically referred to as optimizers. Efficient optimizers that can handle high-dimensional problems are essential in microeconometric modeling.
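To make the idea of an iterative optimizer concrete, the following sketch applies Newton-Raphson to a deliberately simple case: the scalar log-likelihood of an exponential model, whose ML estimate (the reciprocal of the sample mean) is known in closed form and so provides a check on the iterations. The function name and setup are illustrative, not taken from any particular package.

```python
def exponential_mle(data, lam=1.0, tol=1e-10, max_iter=100):
    """Newton-Raphson iterations for the exponential rate parameter lambda.

    Log-likelihood: logL = n*log(lam) - lam*sum(data).
    """
    n = len(data)
    s = sum(data)
    for _ in range(max_iter):
        score = n / lam - s            # first derivative of logL
        hessian = -n / lam ** 2        # second derivative of logL
        step = score / hessian
        lam -= step                    # Newton-Raphson update
        if abs(step) < tol:            # stop once the step is negligible
            break
    return lam

data = [0.5, 1.2, 0.3, 2.0, 0.8]
print(exponential_mle(data))           # converges to 1 / mean(data)
```

Real microeconometric problems replace this scalar update with a step involving the gradient vector and Hessian matrix of the log-likelihood, which is precisely where dimensionality makes computing power matter.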
Increasingly, computer-intensive methods such as Monte Carlo simulation are an important tool for studying the finite sample properties of estimators and tests.
Simulation is also an essential component in estimating model parameters, as in the
case of simulation-assisted optimization and Markov chain Monte Carlo (MCMC)
methods used in Bayesian modeling.
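A minimal sketch of the first of these uses, a Monte Carlo study of finite-sample properties: simulate many small samples, compute the ML variance estimator (divisor n) in each, and estimate its bias relative to the true variance. The design here (normal data, n = 5) is purely illustrative, and the function name is invented for the example.

```python
import random

def monte_carlo_bias(n=5, reps=20000, true_var=1.0, seed=42):
    """Estimate the finite-sample bias of the divisor-n variance estimator."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(reps):
        sample = [rng.gauss(0.0, true_var ** 0.5) for _ in range(n)]
        mean = sum(sample) / n
        var_ml = sum((x - mean) ** 2 for x in sample) / n  # ML divisor n
        estimates.append(var_ml)
    return sum(estimates) / reps - true_var   # estimated bias

# Theory predicts a bias of -true_var / n, i.e. -0.2 in this design,
# and the simulated bias should land close to that value.
print(monte_carlo_bias())
```

The replications are again mutually independent, so a study like this is itself a natural candidate for the parallelization discussed in section 15.2.3.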
