Robert V. Hogg, Joseph W. McKean, Allen T. Craig

11.4. Modern Bayesian Methods 681

which is the pdf of a $\Gamma\{a + (1/2),\, [(\theta^2/2) + (1/b)]^{-1}\}$ distribution. Thus the Gibbs
sampler for this model is given by:


\[
\begin{aligned}
\Theta_i \mid \bar{x}, \tau^2_{i-1} &\sim N\left( \frac{\tau^2_{i-1}}{(\sigma^2/n) + \tau^2_{i-1}}\, \bar{x},\; \frac{\tau^2_{i-1}\sigma^2}{\sigma^2 + n\tau^2_{i-1}} \right) \\[4pt]
\frac{1}{\tau^2_i} \,\Big|\, \bar{x}, \Theta_i &\sim \Gamma\left( a + \frac{1}{2},\; \left[ \frac{\theta_i^2}{2} + \frac{1}{b} \right]^{-1} \right),
\end{aligned} \tag{11.4.8}
\]

for $i = 1, 2, \ldots, n^*$. As discussed above, for specified values of large $m$ and $n^* > m$,
we collect the chain's values $((\Theta_m, \tau_m), (\Theta_{m+1}, \tau_{m+1}), \ldots, (\Theta_{n^*}, \tau_{n^*}))$ and then
obtain the Bayes estimate of $\theta$ (assuming squared-error loss):


\[
\widehat{\theta} = \frac{1}{n^* - m} \sum_{i=m+1}^{n^*} \Theta_i. \tag{11.4.9}
\]

The conditional distribution of $\Theta$ given $\bar{x}$ and $\tau^2_{i-1}$, though, suggests the second
estimate given by


\[
\widehat{\theta}^{\,*} = \frac{1}{n^* - m} \sum_{i=m+1}^{n^*} \frac{\tau_i^2}{\tau_i^2 + (\sigma^2/n)}\, \bar{x}. \tag{11.4.10}
\]
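As a minimal numerical sketch (not from the text; the data summary $\bar{x}$, $\sigma^2$, $n$, the hyperparameters $a$, $b$, the starting value, and the chain lengths are all hypothetical choices), the two conditional draws in (11.4.8) together with the estimates (11.4.9) and (11.4.10) can be coded as:

```python
import numpy as np

def gibbs_normal(xbar, sigma2, n, a, b, m=3000, n_star=6000, seed=1):
    """Sketch of the Gibbs sampler (11.4.8) returning the estimates
    (11.4.9) and (11.4.10); starting value and defaults are arbitrary."""
    rng = np.random.default_rng(seed)
    tau2 = 1.0                                # arbitrary starting value tau^2_0
    thetas, weights = [], []
    for i in range(1, n_star + 1):
        # Theta_i | xbar, tau^2_{i-1} ~ N(mean, var) per the first line of (11.4.8)
        mean = tau2 / (sigma2 / n + tau2) * xbar
        var = tau2 * sigma2 / (sigma2 + n * tau2)
        theta = rng.normal(mean, np.sqrt(var))
        # 1/tau^2_i | xbar, Theta_i ~ Gamma(a + 1/2, scale = [theta^2/2 + 1/b]^{-1})
        tau2 = 1.0 / rng.gamma(a + 0.5, 1.0 / (theta**2 / 2.0 + 1.0 / b))
        if i > m:                             # discard the burn-in portion of the chain
            thetas.append(theta)
            weights.append(tau2 / (tau2 + sigma2 / n))
    theta_hat = np.mean(thetas)                   # estimate (11.4.9)
    theta_hat_star = np.mean(weights) * xbar      # estimate (11.4.10)
    return theta_hat, theta_hat_star
```

Both estimates target the posterior mean of $\Theta$; (11.4.10) averages the conditional means over the sampled $\tau^2$ values and so typically has smaller Monte Carlo variability.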

Example 11.4.2. Lehmann and Casella (1998, p. 257) presented the following
hierarchical Bayes model:


\[
\begin{aligned}
X \mid \lambda &\sim \text{Poisson}(\lambda) \\
\Lambda \mid b &\sim \Gamma(1, b) \\
B &\sim g(b) = \tau^{-1} b^{-2} \exp\{-1/(b\tau)\}, \quad b > 0,\ \tau > 0.
\end{aligned}
\]

For the Gibbs sampler, we need the two conditional pdfs, $g(\lambda \mid x, b)$ and $g(b \mid x, \lambda)$.
The joint pdf is
\[
g(x, \lambda, b) = f(x \mid \lambda)\, h(\lambda \mid b)\, \psi(b). \tag{11.4.11}
\]

Based on the pdfs of the model, (11.4.11), for the first conditional pdf we have


\[
\begin{aligned}
g(\lambda \mid x, b) &\propto e^{-\lambda} \frac{\lambda^x}{x!} \cdot \frac{1}{b}\, e^{-\lambda/b} \\
&\propto \lambda^{x+1-1} e^{-\lambda[1 + (1/b)]},
\end{aligned} \tag{11.4.12}
\]

which is the pdf of a $\Gamma(x+1, b/[b+1])$ distribution.
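As a quick numerical sanity check (not in the text; the values $x = 4$, $b = 2$ are hypothetical), one can verify that the kernel in (11.4.12) integrates to the normalizing constant of a $\Gamma(x+1, b/[b+1])$ density, namely $\Gamma(x+1)\,[b/(b+1)]^{x+1}$:

```python
import math

# Hypothetical values of x and b, chosen only to illustrate the check.
x, b = 4, 2.0
shape, scale = x + 1, b / (b + 1)       # Gamma(x+1, b/[b+1]) parameters

def kernel(lam):
    # right-hand side of (11.4.12), i.e. lambda^x * exp(-lambda[1 + 1/b])
    return lam ** x * math.exp(-lam * (1 + 1 / b))

# Normalizing constant of the Gamma(shape, scale) kernel.
norm = math.gamma(shape) * scale ** shape

# Trapezoidal integration of the kernel over an effectively complete range.
h, upper = 0.001, 60.0
vals = [kernel(i * h) for i in range(int(upper / h) + 1)]
integral = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
assert abs(integral - norm) / norm < 1e-5
```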
For the second conditional pdf, we have


\[
\begin{aligned}
g(b \mid x, \lambda) &\propto \frac{1}{b}\, e^{-\lambda/b} \cdot \tau^{-1} b^{-2} e^{-1/(b\tau)} \\
&\propto b^{-3} \exp\left\{ -\frac{1}{b} \left[ \lambda + \frac{1}{\tau} \right] \right\}.
\end{aligned}
\]
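Combining the two exponential factors gives the kernel $b^{-3}\exp\{-(\lambda + 1/\tau)/b\}$, which is that of an inverse-gamma distribution; equivalently, $1/b$ given $x, \lambda$ has a $\Gamma(2, [\lambda + \tau^{-1}]^{-1})$ distribution. A sketch of the resulting Gibbs sampler (not from the text; the values of $x$, $\tau$, the starting value, and the chain lengths are hypothetical):

```python
import numpy as np

def gibbs_poisson(x, tau, m=3000, n_star=6000, seed=1):
    """Sketch of a Gibbs sampler for the hierarchical Poisson model of
    Example 11.4.2, returning the Bayes estimate of lambda under
    squared-error loss (posterior-mean estimate as in (11.4.9))."""
    rng = np.random.default_rng(seed)
    b = 1.0                                   # arbitrary starting value
    lams = []
    for i in range(1, n_star + 1):
        # lambda | x, b ~ Gamma(x+1, scale = b/(b+1)) per (11.4.12)
        lam = rng.gamma(x + 1, b / (b + 1.0))
        # 1/b | x, lambda ~ Gamma(2, scale = [lambda + 1/tau]^{-1})
        b = 1.0 / rng.gamma(2.0, 1.0 / (lam + 1.0 / tau))
        if i > m:                             # keep post burn-in draws only
            lams.append(lam)
    return np.mean(lams)
```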