
$i$th step of the algorithm is
$$\Theta_i \mid x, \gamma_{i-1} \sim g(\theta \mid x, \gamma_{i-1})$$
$$\Gamma_i \mid x, \theta_i \sim g(\gamma \mid x, \theta_i).$$

Recall from our discussion in Section 11.3 that
$$\Theta_i \xrightarrow{D} k(\theta \mid x), \qquad \Gamma_i \xrightarrow{D} g(\gamma \mid x),$$
as $i \to \infty$. Furthermore, the arithmetic average
$$\frac{1}{m} \sum_{i=1}^{m} W(\Theta_i) \xrightarrow{P} E[W(\Theta) \mid x] = \delta_W(x), \quad \text{as } m \to \infty. \tag{11.4.4}$$

In practice, to obtain the Bayes estimate of $W(\theta)$ by the Gibbs sampler, we generate by Monte Carlo the stream of values $(\theta_1, \gamma_1), (\theta_2, \gamma_2), \ldots$. Then, choosing large values of $m$ and $n^* > m$, our estimate of $W(\theta)$ is the average
$$\frac{1}{n^* - m} \sum_{i=m+1}^{n^*} W(\theta_i). \tag{11.4.5}$$
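As a concrete rendering of this recipe, here is a minimal Python sketch. The names gibbs_estimate, draw_theta, draw_gamma, and W are hypothetical placeholders (not from the text) for the two conditional samplers and the function whose posterior expectation is wanted:

```python
# A minimal sketch of the Gibbs recipe above. The callables draw_theta and
# draw_gamma stand in for the conditional samplers g(theta | x, gamma) and
# g(gamma | x, theta); rng is whatever random generator they require.
def gibbs_estimate(draw_theta, draw_gamma, W, gamma0, m, n_star, rng):
    """Average W(theta_i) over i = m+1, ..., n*, as in (11.4.5)."""
    gamma = gamma0
    total = 0.0
    for i in range(1, n_star + 1):
        theta = draw_theta(gamma, rng)   # Theta_i | x, gamma_{i-1}
        gamma = draw_gamma(theta, rng)   # Gamma_i | x, theta_i
        if i > m:                        # discard the first m draws as burn-in
            total += W(theta)
    return total / (n_star - m)          # estimate of E[W(Theta) | x]
```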

Because of the Monte Carlo generation, these procedures are often called MCMC, for Markov Chain Monte Carlo procedures. We next provide two examples.


Example 11.4.1. Reconsider the conjugate family of normal distributions discussed in Example 11.1.3, with $\theta_0 = 0$. Here we use the model
$$X \mid \Theta \sim N\!\left(\theta, \frac{\sigma^2}{n}\right), \quad \sigma^2 \text{ is known},$$
$$\Theta \mid \tau^2 \sim N(0, \tau^2),$$
$$\frac{1}{\tau^2} \sim \Gamma(a, b), \quad a \text{ and } b \text{ are known}. \tag{11.4.6}$$

To set up the Gibbs sampler for this hierarchical Bayes model, we need the conditional pdfs $g(\theta \mid x, \tau^2)$ and $g(\tau^2 \mid x, \theta)$. For the first, we have
$$g(\theta \mid x, \tau^2) \propto f(x \mid \theta)\, h(\theta \mid \tau^2)\, \psi(\tau^{-2}).$$
As we have been doing, we can ignore standardizing constants; hence, we need only consider the product $f(x \mid \theta) h(\theta \mid \tau^2)$. But this is a product of two normal pdfs, which we obtained in Example 11.1.3. Based on those results, $g(\theta \mid x, \tau^2)$ is the pdf of a $N\!\left(\{\tau^2/[(\sigma^2/n) + \tau^2]\}\, x,\; (\tau^2 \sigma^2)/(\sigma^2 + n\tau^2)\right)$ distribution. For the second pdf, by ignoring standardizing constants and simplifying, we obtain


$$g\!\left(\frac{1}{\tau^2} \,\Big|\, x, \theta\right) \propto f(x \mid \theta)\, h(\theta \mid \tau^2)\, \psi(1/\tau^2)$$
$$\propto \frac{1}{\tau} \exp\left\{-\frac{1}{2}\,\frac{\theta^2}{\tau^2}\right\} \left(\frac{1}{\tau^2}\right)^{a-1} \exp\left\{-\frac{1}{\tau^2}\,\frac{1}{b}\right\}$$
$$\propto \left(\frac{1}{\tau^2}\right)^{a+(1/2)-1} \exp\left\{-\frac{1}{\tau^2}\left[\frac{\theta^2}{2} + \frac{1}{b}\right]\right\}, \tag{11.4.7}$$
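The right side of (11.4.7) is the kernel of a $\Gamma\!\left(a + \frac{1}{2}, \left[\frac{\theta^2}{2} + \frac{1}{b}\right]^{-1}\right)$ distribution for $1/\tau^2$, so both conditional draws come from standard families. A minimal Python sketch of the resulting sampler, assuming NumPy; the function name, starting value, and the numbers in the example call are illustrative, not from the text:

```python
# A sketch of the Gibbs sampler for Example 11.4.1. Here x denotes the
# observed value of the sample mean; tau2 = 1.0 is an arbitrary starting
# value, and the call at the bottom uses hypothetical numbers.
import numpy as np

def gibbs_normal_hierarchy(x, n, sigma2, a, b, m=1000, n_star=6000, seed=0):
    """Return the Gibbs estimate of E[Theta | x] for model (11.4.6)."""
    rng = np.random.default_rng(seed)
    tau2 = 1.0
    draws = []
    for i in range(1, n_star + 1):
        # Theta | x, tau^2 ~ N(tau^2 x/[(sigma^2/n) + tau^2],
        #                      tau^2 sigma^2/[sigma^2 + n tau^2])
        mean = tau2 * x / (sigma2 / n + tau2)
        var = tau2 * sigma2 / (sigma2 + n * tau2)
        theta = rng.normal(mean, np.sqrt(var))
        # 1/tau^2 | x, theta ~ Gamma(a + 1/2, scale = 1/(theta^2/2 + 1/b)),
        # read off from the kernel in (11.4.7)
        prec = rng.gamma(a + 0.5, 1.0 / (theta**2 / 2 + 1.0 / b))
        tau2 = 1.0 / prec
        if i > m:                            # discard burn-in, as in (11.4.5)
            draws.append(theta)
    return np.mean(draws)

# Hypothetical call: observed sample mean x = 1.2 from n = 10 observations
# with sigma^2 = 4, and prior parameters a = 2, b = 1.
print(gibbs_normal_hierarchy(x=1.2, n=10, sigma2=4.0, a=2.0, b=1.0))
```

Here $W(\theta) = \theta$, so the returned average is the Bayes estimate of $\theta$ under squared-error loss.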