11.4. Modern Bayesian Methods

Consider, then, the likelihood function
$$
m(x \mid \gamma) = \int_{-\infty}^{\infty} f(x \mid \theta)\, h(\theta \mid \gamma)\, d\theta. \tag{11.4.13}
$$
Using the pdf $m(x \mid \gamma)$, we obtain an estimate $\widehat{\gamma} = \widehat{\gamma}(x)$, usually by the method of maximum likelihood. For inference on the parameter $\theta$, the empirical Bayes procedure uses the posterior pdf $k(\theta \mid x, \widehat{\gamma})$.
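As a rough numerical sketch of this recipe (ours, not the text's; the function names are hypothetical), one can evaluate the integral in (11.4.13) by quadrature and maximize over $\gamma$ with a one-dimensional search:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

def marginal(x, gamma, f, h, lo, hi):
    # m(x|gamma): integrate f(x|theta) h(theta|gamma) over theta in (lo, hi)
    val, _ = quad(lambda th: f(x, th) * h(th, gamma), lo, hi)
    return val

def eb_gamma_hat(x, f, h, lo, hi, gbounds):
    # gamma_hat maximizing log m(x|gamma) over an interval of gamma values
    res = minimize_scalar(lambda g: -np.log(marginal(x, g, f, h, lo, hi)),
                          bounds=gbounds, method="bounded")
    return res.x

def posterior_kernel(x, theta, gamma_hat, f, h):
    # k(theta | x, gamma_hat), up to the normalizing constant m(x|gamma_hat)
    return f(x, theta) * h(theta, gamma_hat)
```

Example 11.4.3 below admits a closed-form marginal, so this machinery is needed only when the integral in (11.4.13) is intractable.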
We illustrate the empirical Bayes procedure with the following example.


Example 11.4.3. Consider the same situation discussed in Example 11.4.2, except assume that we have a random sample on $X$; i.e., consider the model
$$
\begin{aligned}
X_i \mid \lambda,\; i = 1, 2, \ldots, n \; &\overset{\text{iid}}{\sim} \; \text{Poisson}(\lambda) \\
\Lambda \mid b \; &\sim \; \Gamma(1, b).
\end{aligned}
$$
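A short simulation of this hierarchical model (our sketch; the seed and sample size are arbitrary) may help fix ideas:

```python
import numpy as np

rng = np.random.default_rng(0)
b_true, n = 2.0, 25
lam = rng.gamma(shape=1.0, scale=b_true)   # Lambda | b ~ Gamma(1, b)
x = rng.poisson(lam, size=n)               # X_i | lambda iid Poisson(lambda)
print(lam, x.mean())                       # xbar tracks the realized lambda
```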

Let $\mathbf{X} = (X_1, X_2, \ldots, X_n)'$. Hence,
$$
g(\mathbf{x} \mid \lambda) = \frac{\lambda^{n\bar{x}}}{x_1! \cdots x_n!}\, e^{-n\lambda},
$$
where $\bar{x} = n^{-1} \sum_{i=1}^{n} x_i$. Thus, the pdf we need to maximize is

$$
\begin{aligned}
m(\mathbf{x} \mid b) &= \int_0^{\infty} g(\mathbf{x} \mid \lambda)\, h(\lambda \mid b)\, d\lambda \\
&= \int_0^{\infty} \frac{1}{x_1! \cdots x_n!}\, \lambda^{n\bar{x}+1-1} e^{-n\lambda}\, \frac{1}{b}\, e^{-\lambda/b}\, d\lambda \\
&= \frac{\Gamma(n\bar{x}+1)\,[b/(nb+1)]^{n\bar{x}+1}}{x_1! \cdots x_n!\; b}.
\end{aligned}
$$
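As a quick check of this closed form (ours, with synthetic counts), direct numerical integration of $g(\mathbf{x} \mid \lambda)\, h(\lambda \mid b)$ returns the same value:

```python
import numpy as np
from math import factorial, gamma as Gamma
from scipy.integrate import quad
from scipy.stats import poisson

x = np.array([3, 1, 4, 2, 2])              # synthetic counts
n, s = len(x), x.sum()                     # s = n * xbar
fact = np.prod([factorial(k) for k in x])  # x_1! ... x_n!
b = 1.7                                    # an arbitrary prior scale

# g(x|lam) from the display above; it equals the product of Poisson pmfs
g = lambda lam: lam ** s * np.exp(-n * lam) / fact
assert np.isclose(g(2.0), poisson.pmf(x, 2.0).prod())

# closed form versus direct quadrature of g(x|lam) h(lam|b)
closed = Gamma(s + 1) * (b / (n * b + 1)) ** (s + 1) / (fact * b)
direct, _ = quad(lambda lam: g(lam) * np.exp(-lam / b) / b, 0, np.inf)
print(closed, direct)                      # agree to quadrature accuracy
```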

Taking the partial derivative of $\log m(\mathbf{x} \mid b)$ with respect to $b$, we obtain
$$
\frac{\partial \log m(\mathbf{x} \mid b)}{\partial b} = -\frac{1}{b} + (n\bar{x}+1)\, \frac{1}{b(nb+1)}.
$$

Setting this equal to 0 and solving for $b$, we obtain the solution
$$
\widehat{b} = \bar{x}. \tag{11.4.14}
$$
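A numerical sanity check (ours) confirms this: minimizing $-\log m(\mathbf{x} \mid b)$, with the terms free of $b$ dropped, returns the sample mean.

```python
import numpy as np
from scipy.optimize import minimize_scalar

x = np.array([3, 1, 4, 2, 2])              # synthetic counts
n, s = len(x), x.sum()                     # s = n * xbar

# -log m(x|b), keeping only the terms that involve b
neg_log_m = lambda b: -((s + 1) * np.log(b / (n * b + 1)) - np.log(b))
b_hat = minimize_scalar(neg_log_m, bounds=(1e-6, 100), method="bounded").x
print(b_hat, x.mean())                     # both approximately 2.4
```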

To obtain the empirical Bayes estimate of $\lambda$, we need to compute the posterior pdf with $\widehat{b}$ substituted for $b$. The posterior pdf is
$$
\begin{aligned}
k(\lambda \mid \mathbf{x}, \widehat{b}) &\propto g(\mathbf{x} \mid \lambda)\, h(\lambda \mid \widehat{b}) \\
&\propto \lambda^{n\bar{x}+1-1} e^{-\lambda[n + (1/\widehat{b})]},
\end{aligned} \tag{11.4.15}
$$

which is the pdf of a $\Gamma(n\bar{x}+1,\, \widehat{b}/[n\widehat{b}+1])$ distribution. Therefore, the empirical Bayes estimator under squared-error loss is the mean of this distribution; i.e.,
$$
\widehat{\lambda} = [n\bar{x}+1]\, \frac{\widehat{b}}{n\widehat{b}+1} = \bar{x}, \tag{11.4.16}
$$
since $\widehat{b} = \bar{x}$. Thus, for the above prior, the empirical Bayes estimate agrees with the mle.
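Numerically, the whole procedure indeed collapses to the sample mean; a short illustration (ours, with synthetic counts):

```python
import numpy as np

x = np.array([3, 1, 4, 2, 2])              # synthetic Poisson counts
n, xbar = len(x), x.mean()
b_hat = xbar                               # (11.4.14)
lam_hat = (n * xbar + 1) * b_hat / (n * b_hat + 1)  # mean of (11.4.15)
print(lam_hat, xbar)                       # identical: 2.4 and 2.4
```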
