
$N(\theta, \sigma^{2})$, where $\sigma^{2}$ is known. Then $Y = \overline{X}$ is a sufficient statistic. Consider the Bayes model

$$
\begin{aligned}
Y \mid \theta \;&\sim\; N(\theta, \sigma^{2}/n) \\
\Theta \;&\sim\; h(\theta) \propto b^{-1}\exp\{-(\theta-a)/b\} \big/ \bigl(1+\exp\{-[(\theta-a)/b]\}\bigr)^{2}, \quad -\infty < \theta < \infty,
\end{aligned}
\qquad (11.3.1)
$$

where $a$ and $b>0$ are known; i.e., the prior is a logistic distribution. Thus the posterior pdf is

$$
k(\theta \mid y) \;=\;
\frac{\dfrac{1}{\sqrt{2\pi}\,\sigma/\sqrt{n}} \exp\left\{ -\dfrac{1}{2} \dfrac{(y-\theta)^{2}}{\sigma^{2}/n} \right\} b^{-1} e^{-(\theta-a)/b} \big/ \bigl(1+e^{-[(\theta-a)/b]}\bigr)^{2}}
{\displaystyle \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}\,\sigma/\sqrt{n}} \exp\left\{ -\frac{1}{2} \frac{(y-\theta)^{2}}{\sigma^{2}/n} \right\} b^{-1} e^{-(\theta-a)/b} \big/ \bigl(1+e^{-[(\theta-a)/b]}\bigr)^{2} \, d\theta}.
$$

Assuming squared-error loss, the Bayes estimate is the mean of this posterior distribution. Its computation involves two integrals, which cannot be obtained in closed form. We can, however, think of the integration in the following way. Consider the likelihood $f(y \mid \theta)$ as a function of $\theta$; that is, consider the function

$$
w(\theta) = f(y \mid \theta) = \frac{1}{\sqrt{2\pi}\,\sigma/\sqrt{n}} \exp\left\{ -\frac{1}{2} \frac{(y-\theta)^{2}}{\sigma^{2}/n} \right\}.
$$

We can then write the Bayes estimate as

$$
\delta(y) \;=\;
\frac{\int_{-\infty}^{\infty} \theta\, w(\theta)\, b^{-1} e^{-(\theta-a)/b} \big/ \bigl(1+e^{-[(\theta-a)/b]}\bigr)^{2} \, d\theta}
{\int_{-\infty}^{\infty} w(\theta)\, b^{-1} e^{-(\theta-a)/b} \big/ \bigl(1+e^{-[(\theta-a)/b]}\bigr)^{2} \, d\theta}
\;=\; \frac{E[\Theta w(\Theta)]}{E[w(\Theta)]}, \qquad (11.3.2)
$$

where the expectation is taken with $\Theta$ having the logistic prior distribution.
The estimation can be carried out by simple Monte Carlo. Independently, generate $\Theta_1, \Theta_2, \ldots, \Theta_m$ from the logistic distribution with pdf as in (11.3.1). This generation is easily computed because the inverse of the logistic cdf is given by $a + b\log\{u/(1-u)\}$, for $0 < u < 1$. Then form the random variable

$$
T_m \;=\; \frac{m^{-1} \sum_{i=1}^{m} \Theta_i w(\Theta_i)}{m^{-1} \sum_{i=1}^{m} w(\Theta_i)}. \qquad (11.3.3)
$$


By the Weak Law of Large Numbers (Theorem 5.1.1) and Slutsky's Theorem (Theorem 5.2.4), $T_m \to \delta(y)$ in probability. The value of $m$ is under our control and can be taken quite large. Thus simple Monte Carlo techniques enable us to compute this Bayes estimate. Note that we can bootstrap this sample to obtain a confidence interval for $E[\Theta w(\Theta)]/E[w(\Theta)]$; see Exercise 11.3.2.
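
To make this concrete, here is a minimal Python sketch of the estimator $T_m$ (the code and the numerical values of $y$, $n$, $\sigma$, $a$, and $b$ are illustrative assumptions, not from the text). It draws logistic variates through the inverse cdf $a + b\log\{u/(1-u)\}$ and forms the ratio (11.3.3); the normalizing constant of $w(\theta)$ is dropped because it cancels in the ratio.

```python
import numpy as np

def bayes_estimate_mc(y, n, sigma, a, b, m=200_000, seed=None):
    """Monte Carlo approximation T_m (11.3.3) of the Bayes estimate delta(y)."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=m)
    # Inverse logistic cdf: Theta_i = a + b*log(u/(1-u)), u ~ uniform(0,1).
    theta = a + b * np.log(u / (1.0 - u))
    # w(theta) up to a constant; the factor 1/(sqrt(2*pi)*sigma/sqrt(n))
    # cancels between the numerator and denominator of T_m, so it is omitted.
    w = np.exp(-0.5 * (y - theta) ** 2 / (sigma ** 2 / n))
    return np.sum(theta * w) / np.sum(w)

# Hypothetical data: ybar = 1.2 from n = 25 observations with sigma = 2,
# under a logistic prior with a = 0 and b = 1.
print(bayes_estimate_mc(y=1.2, n=25, sigma=2.0, a=0.0, b=1.0, seed=1))
```

Resampling the pairs $(\Theta_i, w(\Theta_i))$ and recomputing the ratio gives one route to the bootstrap confidence interval mentioned above.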
Besides simple Monte Carlo methods, there are other, more complicated Monte Carlo procedures that are useful in Bayesian inference. For motivation, consider the case in which we want to generate an observation that has pdf $f_X(x)$, but this generation is somewhat difficult. Suppose, however, that it is easy to generate both $Y$, with pdf $f_Y(y)$, and an observation from the conditional pdf $f_{X|Y}(x \mid y)$. As the next theorem shows, if we do these sequentially, then we can easily generate from $f_X(x)$; a short sketch of the two-stage scheme follows.
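
Before the theorem, a small numerical illustration of the two-stage scheme (the particular densities are assumed for demonstration and are not from the text): take $Y$ gamma and $X \mid Y = y$ Poisson, a pair whose marginal $f_X$ is known to be negative binomial, so the two-stage draws can be checked directly.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 100_000

# Stage 1: generate Y from f_Y -- here an assumed Gamma(shape=2, scale=3).
y = rng.gamma(shape=2.0, scale=3.0, size=m)
# Stage 2: generate X from the conditional pdf f_{X|Y}(x|y) -- here Poisson(y).
x = rng.poisson(y)

# For this gamma-Poisson pair the marginal of X is negative binomial with
# n = 2 and p = 1/(1 + 3) = 0.25, so the two-stage sample can be compared
# against direct draws from the marginal.
direct = rng.negative_binomial(n=2, p=0.25, size=m)
print(x.mean(), direct.mean())  # both should be near the marginal mean 6.0
```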
