Editors’ Introduction
of econometric techniques and applications, theory has responded to the much
richer sources of data that have become available (not only at a micro or indi-
vidual level, as indicated in Chapter 12), combined with increases in computing
power. As Banerjee and Wagner note, we now have long time series on macroeco-
nomic and industry-level data. Compared to just twenty years ago, there is thus a
wealth of data on micro-, industry- and macro-panels. A panel dataset embodies two
dimensions: the cross-section dimension and the time-series dimension, so that,
in a macro-context, for example, we can consider the question of convergence not
just of a single variable (say, of a real exchange rate to a comparator, be that a
PPP hypothetical or an alternative actual rate), but of a group of variables, which
is representative of the multidimensional nature of growth and cycles. A starting
point for such an analysis is to assess the unit root properties of panel data but,
as in the univariate case, issues such as dependency, the specification of determin-
istic terms, and the presence of structural breaks are key practical matters that, if
incorrectly handled, can lead to misleading conclusions. Usually, the question of
unit roots is a precursor to cointegration analysis, and Banerjee and Wagner guide
the reader through the central methods, most of which have been developed in
the last decade. Empirical illustrations, based on exchange rate pass-through in
the euro-area and the environmental Kuznets curve, complement the theoretical
analysis.
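To fix ideas, the sketch below simulates a small panel of independent random walks and averages the Dickey–Fuller t-statistics across cross-sectional units, in the spirit of first-generation panel unit root tests. It is purely illustrative: the series are artificial, cross-sectional independence is assumed by construction, and no critical values are supplied, so it does not reproduce the procedures surveyed by Banerjee and Wagner.

```python
import numpy as np

def df_t_stat(y):
    """t-statistic on rho in the Dickey-Fuller regression
    dy_t = rho * y_{t-1} + e_t (no constant, no trend)."""
    dy = np.diff(y)
    x = y[:-1]
    rho = (x @ dy) / (x @ x)
    resid = dy - rho * x
    s2 = resid @ resid / (len(dy) - 1)   # residual variance
    se = np.sqrt(s2 / (x @ x))           # standard error of rho
    return rho / se

rng = np.random.default_rng(0)
N, T = 20, 200
# N independent random walks: every unit has a unit root by construction
panel = np.cumsum(rng.normal(size=(N, T)), axis=1)

# Average the per-unit t-statistics across the cross-section
t_bar = np.mean([df_t_stat(panel[i]) for i in range(N)])
print(round(t_bar, 2))
```

Averaging over the cross-section is what gives panel tests their power gains over univariate tests, but it is also where the practical issues flagged above bite: cross-sectional dependence invalidates the independence assumed here, and structural breaks distort the individual regressions.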
Whilst the emphasis in Chapter 13 is on panels of macroeconomic or industry-
level data, in Chapter 14, Colin Cameron, in the first of two chapters in Part
V, provides a survey of microeconometric methods, with an emphasis on recent
developments. The data underlying such developments are at the level of the
individuals, households and firms. A prototypical question in microeconometrics
relates to the identification, estimation and evaluation of marginal effects using
individual-level data; for example, the effect on earnings of an additional year of
education. This example is often used to motivate some basic estimation meth-
ods, such as least squares, maximum likelihood and instrumental variables, in
undergraduate and graduate texts in econometrics, so it is instructive to see how
recent developments have extended these methods. Developments of the basic
methods include generalized method of moments (GMM), empirical likelihood,
simulation-based methods, quantile regression and nonparametric and semipara-
metric estimation, whilst developments in inference include robustifying standard
tests and bootstrap methods. Apart from estimation and inference, Cameron con-
siders a number of other issues that occur frequently in microeconometric studies:
in particular, issues related to causation, as in estimating and evaluating treatment
effects; heterogeneity, for example due to regressors or unobservables; and the
nature of microeconometric data, such as survey data and the sampling scheme,
with problems such as missing data and measurement error.
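The earnings-and-education example can be made concrete with a short simulation. In the sketch below, all numbers and the instrument are hypothetical, chosen only to show why least squares is biased when schooling is correlated with unobserved ability, and how an instrumental variables (two-stage least squares) estimator recovers the marginal effect.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 50_000
ability = rng.normal(size=n)        # unobserved confounder
z = rng.normal(size=n)              # hypothetical instrument: shifts schooling,
                                    # unrelated to ability
educ = 12 + 0.5 * z + 0.8 * ability + rng.normal(size=n)
log_wage = 1.0 + 0.10 * educ + 0.5 * ability + rng.normal(size=n)

# OLS: biased upward, since ability raises both schooling and wages
X = np.column_stack([np.ones(n), educ])
b_ols = np.linalg.lstsq(X, log_wage, rcond=None)[0]

# 2SLS: first stage projects educ on the instrument,
# second stage regresses log wages on the fitted values
Z = np.column_stack([np.ones(n), z])
educ_hat = Z @ np.linalg.lstsq(Z, educ, rcond=None)[0]
X_iv = np.column_stack([np.ones(n), educ_hat])
b_iv = np.linalg.lstsq(X_iv, log_wage, rcond=None)[0]

# OLS slope exceeds the true effect of 0.10; 2SLS is close to it
print(round(b_ols[1], 2), round(b_iv[1], 2))
```

The gap between the two slope estimates is exactly the kind of identification problem, driven here by unobserved heterogeneity, that motivates the treatment-effects and instrumental-variables material Cameron surveys.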
The development of econometrics, particularly over the last decade or so, has
been symbiotic with advances in computing, especially in personal computers.
In Chapter 15, David Jacho-Chávez and Pravin Trivedi focus on
the relationship between empirical microeconometrics and computational consid-
erations, which they call, rather evocatively, a “matrimony” between computing