offers a fruitful way of combining different methods to inform the overall view on
convergence.
Part X comprises two chapters on applied econometrics and its relationship
to computing. In Chapter 28, Bruce McCullough considers the problem of test-
ing econometric software. The importance of this issue is hard to overstate.
Econometric programs that are inaccurate, for any reason, will produce misleading
results not only for the individual researcher but, if published, for the profession
more generally, and will lead to applications that are impossible to replicate. The
development of sophisticated methods of estimation means that we must also be
ever-vigilant in ensuring that software meets established standards of accuracy. A
seminal contribution to the development of accuracy benchmarks was Longley
(1967). As McCullough notes, Longley worked out by hand the solution to a lin-
ear regression problem with a constant and six explanatory variables. When the
problem was run through the computers of the time, the answers were worryingly
different. Of course, the Longley benchmark is now passed by the economet-
ric packages that are familiar to applied econometricians. However, the nature
of the problems facing the profession is different (sophisticated estimators, large
datasets, simulation-based estimators) and McCullough’s results imply that there
is no reason for complacency. Many econometric estimators involve problems of
a nonlinear nature – for example, the GARCH and multivariate GARCH estima-
tors and the probit estimator – and it is precisely when a nonlinear solver is
involved that the user is most likely to encounter problems, especially when
relying on the default options. Another area that has seen substantial growth
in the last two decades has
been the use of Monte Carlo experimentation, an area that makes fundamental
use of random numbers, and hence any package must have a reliable random
number generator (RNG). But are the numbers so generated actually random?
The answer is, not necessarily! (The reader may wish to refer to Volume 1 of this
Handbook, which includes a chapter by Jurgen Doornik on random number gener-
ation.) The importance of maintaining standards of numerical accuracy has been
recognised in the National Institute of Standards and Technology’s Statistical Ref-
erence Datasets, which has resulted in a number of articles using these datasets to
evaluate software for econometric problems. To illustrate some of the issues in soft-
ware evaluation, for example in establishing a benchmark, McCullough includes
a study of the accuracy of a number of packages in estimating ARMA models. The
central methods for the estimation of such models include unconditional least
squares (UCLS), conditional least squares (CLS), and exact maximum likelihood.
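To give a flavour of why these methods can disagree, the following rough sketch
(an illustration of ours rather than anything taken from the chapter; it is written
in Python, assumes only numpy and scipy, uses an AR(1) model to stand in for the
general ARMA case, and fixes the innovation variance at one for brevity) contrasts
conditional least squares, which conditions on the first observation, with exact
maximum likelihood, which also incorporates the stationary density of that
observation:

    import numpy as np
    from scipy.optimize import minimize_scalar

    # Simulated AR(1) data, y_t = phi*y_{t-1} + e_t (illustrative only)
    rng = np.random.default_rng(0)
    phi_true, n = 0.7, 200
    y = np.empty(n)
    y[0] = rng.standard_normal() / np.sqrt(1 - phi_true**2)
    for t in range(1, n):
        y[t] = phi_true * y[t - 1] + rng.standard_normal()

    # Conditional least squares: condition on y_0 and regress y_t on y_{t-1}
    phi_cls = np.dot(y[1:], y[:-1]) / np.dot(y[:-1], y[:-1])

    # Exact Gaussian log-likelihood: adds the stationary density of y_0
    # (innovation variance fixed at 1; additive constants dropped)
    def neg_loglik(phi):
        resid = y[1:] - phi * y[:-1]
        ll = 0.5 * np.log(1 - phi**2) - 0.5 * (1 - phi**2) * y[0]**2
        ll -= 0.5 * np.dot(resid, resid)
        return -ll

    phi_mle = minimize_scalar(neg_loglik, bounds=(-0.99, 0.99), method="bounded").x
    print(f"CLS: {phi_cls:.6f}   exact ML: {phi_mle:.6f}")

The two estimates will typically differ only slightly in this simple case, but in a
full ARMA model the exact likelihood must be maximised numerically, and the
choice of algorithm, starting values and convergence criteria all matter.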
The questions of interest concern not only the accuracy of the point estimates
produced by these methods in different packages, but also which method of
standard error calculation is being used. Overall, McCullough concludes that
we, as a profession,
have some way to go in ensuring that the software that is being used is accurate,
that the underlying methods are well-documented, and that published results are
replicable.
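To make the benchmarking idea concrete, the following sketch (again an
illustration of ours rather than a reproduction of McCullough’s tests; it is written
in Python and assumes that numpy and statsmodels are installed, the latter used
only because it happens to ship the Longley data) revisits the Longley regression
discussed above. The data are notoriously ill-conditioned, so solving the normal
equations by explicit inversion can lose several significant digits relative to a
numerically careful least-squares routine – exactly the kind of discrepancy that
certified values such as those in the NIST StRD are designed to expose:

    import numpy as np
    import statsmodels.api as sm  # assumed available; used only for the Longley data

    # Longley (1967): 16 annual observations, a constant and six regressors
    data = sm.datasets.longley.load_pandas()
    y = data.endog.to_numpy()
    X = np.column_stack([np.ones(len(y)), data.exog.to_numpy()])
    print(f"condition number of X: {np.linalg.cond(X):.2e}")

    # Naive route: solve the normal equations by explicit inversion
    beta_naive = np.linalg.inv(X.T @ X) @ (X.T @ y)

    # Careful route: least squares via an orthogonal factorisation
    beta_ls, *_ = np.linalg.lstsq(X, y, rcond=None)

    # Side-by-side comparison; in a benchmark exercise the second column would
    # instead hold the certified coefficient values
    for b_naive, b_ls in zip(beta_naive, beta_ls):
        print(f"{b_naive:20.8f}  {b_ls:20.8f}")

On well-behaved data the two routes agree to machine precision; on the Longley
data the explicit-inversion route will typically lose accuracy in the trailing digits,
a small-scale version of the failures Longley originally documented.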
In Chapter 29, Marius Ooms takes a historical perspective on the nature of
applied econometrics as it has been represented by publications and reviews of
econometric and statistical software in the Journal of Applied Econometrics (JAE).