

  2. Maximization Step: Let
$$\hat{\theta}^{(m+1)} = \operatorname{Argmax}_{\theta}\, Q(\theta \mid \hat{\theta}^{(m)}, \mathbf{x}). \tag{6.6.6}$$
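The iteration in (6.6.6) can be sketched directly in code. Below is a minimal Python version, assuming only that $Q(\cdot \mid \hat{\theta}^{(m)})$ can be evaluated; the quadratic Q_toy is a made-up stand-in (not a statistical model) whose maximizer is the midpoint of $\theta^{(m)}$ and 3, so the iterates converge to 3.

```python
from scipy.optimize import minimize_scalar

def em_iterate(Q, theta0, n_iters=25):
    """Run theta^(m+1) = Argmax_theta Q(theta | theta^(m)) for n_iters steps."""
    theta = theta0
    for _ in range(n_iters):
        # Maximization step (6.6.6), carried out numerically.
        theta = minimize_scalar(lambda t: -Q(t, theta)).x
    return theta

# Toy stand-in for Q: maximized at the midpoint of theta_m and 3.
Q_toy = lambda t, theta_m: -(t - 0.5 * (theta_m + 3.0)) ** 2
print(em_iterate(Q_toy, theta0=0.0))   # converges to 3.0
```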

Under strong assumptions, it can be shown that $\hat{\theta}^{(m)}$ converges in probability to the maximum likelihood estimate as $m \to \infty$. We will not show these results, but as the next theorem shows, $\hat{\theta}^{(m+1)}$ always increases the likelihood over $\hat{\theta}^{(m)}$.


Theorem 6.6.1. The sequence of estimates $\hat{\theta}^{(m)}$, defined by Algorithm 6.6.1, satisfies
$$L(\hat{\theta}^{(m+1)} \mid \mathbf{x}) \ge L(\hat{\theta}^{(m)} \mid \mathbf{x}). \tag{6.6.7}$$


Proof: Because $\hat{\theta}^{(m+1)}$ maximizes $Q(\theta \mid \hat{\theta}^{(m)}, \mathbf{x})$, we have
$$Q(\hat{\theta}^{(m+1)} \mid \hat{\theta}^{(m)}, \mathbf{x}) \ge Q(\hat{\theta}^{(m)} \mid \hat{\theta}^{(m)}, \mathbf{x});$$
that is,
$$E_{\hat{\theta}^{(m)}}\!\left[\log L^{c}(\hat{\theta}^{(m+1)} \mid \mathbf{x}, \mathbf{Z})\right] \ge E_{\hat{\theta}^{(m)}}\!\left[\log L^{c}(\hat{\theta}^{(m)} \mid \mathbf{x}, \mathbf{Z})\right], \tag{6.6.8}$$


where the expectation is taken under the pdf $k(\mathbf{z} \mid \hat{\theta}^{(m)}, \mathbf{x})$. By expression (6.6.3), we can complete the proof by showing that
$$E_{\hat{\theta}^{(m)}}\!\left[\log k(\mathbf{Z} \mid \hat{\theta}^{(m+1)}, \mathbf{x})\right] \le E_{\hat{\theta}^{(m)}}\!\left[\log k(\mathbf{Z} \mid \hat{\theta}^{(m)}, \mathbf{x})\right]. \tag{6.6.9}$$

Keep in mind that these expectations are taken under the conditional pdf of $\mathbf{Z}$ given $\hat{\theta}^{(m)}$ and $\mathbf{x}$. An application of Jensen's inequality, (1.10.5), yields
$$\begin{aligned}
E_{\hat{\theta}^{(m)}}\left\{\log\left[\frac{k(\mathbf{Z} \mid \hat{\theta}^{(m+1)}, \mathbf{x})}{k(\mathbf{Z} \mid \hat{\theta}^{(m)}, \mathbf{x})}\right]\right\}
&\le \log E_{\hat{\theta}^{(m)}}\left[\frac{k(\mathbf{Z} \mid \hat{\theta}^{(m+1)}, \mathbf{x})}{k(\mathbf{Z} \mid \hat{\theta}^{(m)}, \mathbf{x})}\right] \\
&= \log \int \frac{k(\mathbf{z} \mid \hat{\theta}^{(m+1)}, \mathbf{x})}{k(\mathbf{z} \mid \hat{\theta}^{(m)}, \mathbf{x})}\, k(\mathbf{z} \mid \hat{\theta}^{(m)}, \mathbf{x})\, d\mathbf{z} \\
&= \log(1) = 0.
\end{aligned} \tag{6.6.10}$$

This last result establishes (6.6.9) and, hence, finishes the proof.
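Inequality (6.6.10) says that the expected log ratio of the two conditional pdfs (the negative of a Kullback-Leibler divergence) is at most $\log(1) = 0$. Below is a quick numerical check in Python, with two normal pdfs standing in for $k(z \mid \hat{\theta}^{(m+1)}, \mathbf{x})$ and $k(z \mid \hat{\theta}^{(m)}, \mathbf{x})$; the locations 1.2 and 1.0 are illustrative assumptions, since the text leaves $k$ general.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

# Stand-ins for the two conditional pdfs of Z given x (illustrative only):
k_next = lambda z: norm.pdf(z, loc=1.2)   # plays k(z | theta^(m+1), x)
k_curr = lambda z: norm.pdf(z, loc=1.0)   # plays k(z | theta^(m),   x)
log_ratio = lambda z: norm.logpdf(z, loc=1.2) - norm.logpdf(z, loc=1.0)

# Left side of (6.6.10): E[ log(k_next(Z)/k_curr(Z)) ] with Z ~ k_curr.
lhs, _ = quad(lambda z: log_ratio(z) * k_curr(z), -np.inf, np.inf)

# Right side: log E[ k_next(Z)/k_curr(Z) ] = log of the integral of k_next.
rhs = np.log(quad(k_next, -np.inf, np.inf)[0])

print(lhs, rhs)   # lhs is about -0.02, which is <= rhs = log(1) = 0
```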


As an example, suppose $X_1, X_2, \ldots, X_{n_1}$ are iid with pdf $f(x-\theta)$, for $-\infty < x < \infty$, where $-\infty < \theta < \infty$. Denote the cdf of $X_i$ by $F(x-\theta)$. Let $Z_1, Z_2, \ldots, Z_{n_2}$ denote the censored observations. For these observations, we only know that $Z_j > a$, for some $a$ that is known, and that the $Z_j$s are independent of the $X_i$s. Then the observed and complete likelihoods are given by


$$L(\theta \mid \mathbf{x}) = [1 - F(a-\theta)]^{n_2} \prod_{i=1}^{n_1} f(x_i - \theta) \tag{6.6.11}$$

$$L^{c}(\theta \mid \mathbf{x}, \mathbf{z}) = \prod_{i=1}^{n_1} f(x_i - \theta) \prod_{i=1}^{n_2} f(z_i - \theta). \tag{6.6.12}$$
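To continue the example concretely, take $f$ to be the standard normal pdf (an illustrative assumption; the text leaves $f$ general). The E-step then replaces each censored observation by its conditional mean $E_{\hat{\theta}^{(m)}}[Z \mid Z > a] = \hat{\theta}^{(m)} + \phi(a - \hat{\theta}^{(m)})/[1 - \Phi(a - \hat{\theta}^{(m)})]$, and the M-step averages the completed data. A minimal sketch, with simulated data and a starting value chosen purely for illustration:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
theta_true, a = 2.0, 2.5
x = rng.normal(theta_true, 1.0, size=40)    # observed X_1, ..., X_{n1}
n2 = 15                                     # censored Z_j, known only to exceed a

def obs_loglik(theta):
    """Log of the observed likelihood L(theta | x), equation (6.6.11)."""
    return n2 * norm.logsf(a - theta) + norm.logpdf(x - theta).sum()

theta = 0.0                                 # starting value theta^(0)
for _ in range(200):
    # E-step: conditional mean of a censored observation given Z > a.
    z_hat = theta + norm.pdf(a - theta) / norm.sf(a - theta)
    # M-step: Q is maximized by the mean of the "completed" data.
    theta_new = (x.sum() + n2 * z_hat) / (len(x) + n2)
    # Theorem 6.6.1: the observed likelihood never decreases.
    assert obs_loglik(theta_new) >= obs_loglik(theta) - 1e-9
    if abs(theta_new - theta) < 1e-10:
        theta = theta_new
        break
    theta = theta_new

print(theta)    # approximate maximum likelihood estimate of theta
```

The assert checks Theorem 6.6.1 numerically along the way; at a fixed point of the update, $\hat{\theta}$ solves the likelihood equation for (6.6.11), so the iteration stops at a stationary point of the observed likelihood.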