
dependent variable models with normal errors the Geweke–Hajivassiliou–Keane (GHK) simulator is often used. Second, to enable convergence and avoid “chatter,” the same underlying random numbers used to obtain $u_i^S$ should be used at each iteration. Third, the draws from $g(u_i|\theta_2)$ need not be independent. For example, better approximations for given $S$ may be obtained by using dependent quasi-random numbers, such as Halton sequences, rather than independent pseudo-random numbers, and by the use of antithetic sampling. Fourth, it may be difficult to make draws from $u_i^S$ using standard methods such as inverse transformation and accept–reject methods. Newer Markov chain Monte Carlo methods, widely used in Bayesian analysis, may then be used.
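As a concrete illustration of the second and third points, the following is a minimal sketch of maximum simulated likelihood for a random-effects probit in Python. The model, the variable names, and the settings (the number of draws S, a scrambled Halton sequence via scipy.stats.qmc) are illustrative assumptions, not a prescribed implementation; the essential features are that quasi-random rather than pseudo-random draws are used, and that the draws are generated once and held fixed across optimizer iterations to avoid chatter.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm, qmc

rng = np.random.default_rng(42)

# Hypothetical data for a random-effects probit:
# y_it = 1{ x_it * beta + u_i + e_it > 0 }, u_i ~ N(0, sigma_u^2), e_it ~ N(0, 1)
n, T, beta_true, sigma_u = 500, 5, 1.0, 1.0
x = rng.normal(size=(n, T))
u = sigma_u * rng.normal(size=(n, 1))
y = (beta_true * x + u + rng.normal(size=(n, T)) > 0).astype(float)

# S quasi-random N(0,1) draws per individual, generated ONCE before
# optimization so the same underlying random numbers are used at every
# iteration and the simulated likelihood is a smooth function of theta.
S = 50
halton = qmc.Halton(d=1, scramble=True, seed=42)
v = norm.ppf(halton.random(n * S)).reshape(n, S)

def neg_simulated_loglik(theta):
    beta, log_sigma = theta
    sigma = np.exp(log_sigma)
    # Index for each (i, t, s): beta * x_it + sigma * v_is
    idx = beta * x[:, :, None] + sigma * v[:, None, :]
    # P(y_it | u_i = sigma * v_is) = Phi((2 * y_it - 1) * index)
    p = norm.cdf((2.0 * y[:, :, None] - 1.0) * idx)
    # L_i is approximated by (1/S) sum_s prod_t P(y_it | u_i = sigma * v_is)
    lik = np.prod(p, axis=1).mean(axis=1)
    return -np.sum(np.log(lik + 1e-300))

res = minimize(neg_simulated_loglik, x0=np.array([0.5, 0.0]), method="BFGS")
print("beta_hat, sigma_u_hat:", res.x[0], np.exp(res.x[1]))
```

With the draws held fixed, the objective passed to the optimizer is deterministic, so standard gradient-based methods converge as they would for an exact likelihood.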


14.3.4 Simulation-based Bayesian analysis


Bayesian analysis can serve two purposes. First, it can provide a quite different method of inference, as it views parameters as random variables, with the goal being to combine the prior distribution on parameters and the sample distribution of the data to recover the posterior distribution of these parameters. By contrast, classical frequentist inference views parameters as taking fixed values that are unknown, with data used to make inference on those unknown values. Second, if priors are chosen to be sufficiently uninformative so that they have little effect on the posterior distribution, then it is possible to directly use the posterior distribution of parameters to perform classical frequentist inference.
Econometric applications most often use Bayesian methods for the second purpose. Recent advances in computational methods, outlined below, can make Bayesian methods especially useful in analytically intractable models with many parameters that may be very difficult to estimate using conventional ML or even simulated ML methods. Even so, Bayesian methods are used sparingly in econometrics when compared to the statistics literature. One reason is a hesitation to use fully parametric methods, though Bayesian methods do allow quite flexible parametric models to be specified.
Let $L(y|X,\theta) = f(y|X,\theta)$ denote the sample joint density, or likelihood, and let $\pi(\theta)$ denote the prior distribution. Then by Bayes' rule the posterior density for $\theta$ is:


$$
p(\theta|y,X) = \frac{L(y|X,\theta)\,\pi(\theta)}{f(y|X)}, \tag{14.14}
$$

where $f(y|X) = \int_{R(\theta)} L(y|X,\theta)\,\pi(\theta)\,d\theta$ and $R(\theta)$ denotes the support of $\pi(\theta)$. Because the denominator $f(y|X)$ is free of $\theta$, it is standard simply to write:


$$
p(\theta|y) \propto L(y|\theta)\,\pi(\theta), \tag{14.15}
$$

where the regressors $X$ are suppressed for notational simplicity. The posterior density is proportional to the product of the likelihood and prior.
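To make (14.15) concrete, the following sketch (hypothetical data and settings) evaluates the unnormalized posterior $L(y|\theta)\,\pi(\theta)$ on a grid of parameter values. Because $f(y)$ does not involve $\theta$, dividing by a numerical approximation of the integral recovers $p(\theta|y)$ without ever computing $f(y)$ analytically.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
y = rng.normal(loc=1.5, scale=1.0, size=50)      # data: y_i ~ N(theta, 1)

theta = np.linspace(-2.0, 5.0, 1001)             # grid over the parameter
dtheta = theta[1] - theta[0]
prior = norm.pdf(theta, loc=0.0, scale=2.0)      # pi(theta): a N(0, 4) prior

# log L(y | theta) evaluated at every grid point
loglik = norm.logpdf(y[:, None], loc=theta, scale=1.0).sum(axis=0)

# Unnormalized posterior L(y|theta) * pi(theta); subtracting the maximum
# log-likelihood before exponentiating guards against overflow and only
# changes the constant of proportionality.
unnorm = np.exp(loglik - loglik.max()) * prior

# f(y) is free of theta, so a Riemann-sum normalization recovers p(theta|y)
posterior = unnorm / (unnorm.sum() * dtheta)

print("posterior mean:", (theta * posterior).sum() * dtheta)
```

Grid evaluation is feasible only for very low-dimensional $\theta$, which is precisely why the simulation methods discussed in this section matter in practice.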
The heart of Bayesian analysis is the posterior $p(\theta|y)$. In the simplest cases a closed-form expression for this exists. For example, if $y$ is normal with mean $X\beta$ and known variance, and the prior for $\beta$ is normal with specified mean and variance, then the posterior is normal.
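This normal-normal case can be verified directly. The sketch below (hypothetical data; a diffuse but proper prior) computes the posterior for $\beta$ when $y \sim N(X\beta, \sigma^2 I)$ with $\sigma^2$ known and $\beta \sim N(\mu_0, V_0)$, using the standard conjugate result $V_1 = (V_0^{-1} + X'X/\sigma^2)^{-1}$ and $\mu_1 = V_1(V_0^{-1}\mu_0 + X'y/\sigma^2)$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: y ~ N(X beta, sigma2 * I) with sigma2 known
n, sigma2 = 200, 1.0
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + np.sqrt(sigma2) * rng.normal(size=n)

# N(mu0, V0) prior on beta; V0 is large, so the prior is fairly uninformative
k = X.shape[1]
mu0 = np.zeros(k)
V0 = 10.0 * np.eye(k)

# Conjugate update: posterior precision = prior precision + data precision
V0_inv = np.linalg.inv(V0)
V1 = np.linalg.inv(V0_inv + X.T @ X / sigma2)      # posterior variance
mu1 = V1 @ (V0_inv @ mu0 + X.T @ y / sigma2)       # posterior mean

print("posterior mean:", mu1)      # close to OLS under this diffuse prior
print("posterior variance:\n", V1)
```

With the diffuse prior, the posterior mean is close to the OLS estimate, illustrating the earlier point that uninformative priors let the posterior serve essentially frequentist purposes.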
