6.1. Maximum Likelihood Estimation

Proof: First suppose $g$ is a one-to-one function. The likelihood of interest is $L(g(\theta))$, but because $g$ is one-to-one,
$$\max L(g(\theta)) = \max_{\eta = g(\theta)} L(\eta) = \max_{\eta} L(g^{-1}(\eta)).$$
But the maximum occurs when $g^{-1}(\eta) = \hat{\theta}$; i.e., take $\hat{\eta} = g(\hat{\theta})$.
Suppose $g$ is not one-to-one. For each $\eta$ in the range of $g$, define the set (preimage)
$$g^{-1}(\eta) = \{\theta : g(\theta) = \eta\}.$$


The maximum occurs at $\hat{\theta}$ and the domain of $g$ is $\Omega$, which covers $\hat{\theta}$. Hence, $\hat{\theta}$ is in one of these preimages and, in fact, it can be in only one preimage. Hence to maximize $L(\eta)$, choose $\hat{\eta}$ so that $g^{-1}(\hat{\eta})$ is that unique preimage containing $\hat{\theta}$. Then $\hat{\eta} = g(\hat{\theta})$.
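As a concrete instance of this preimage argument (an illustration, not from the text): if $\eta = g(\theta) = \theta^2$ on $\Omega = (-\infty, \infty)$, then $g^{-1}(\eta) = \{-\sqrt{\eta}, \sqrt{\eta}\}$ for $\eta > 0$, and the unique preimage containing $\hat{\theta}$ is selected by taking $\hat{\eta} = \hat{\theta}^2 = g(\hat{\theta})$.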

Consider Example 4.1.2, where $X_1, \ldots, X_n$ are iid Bernoulli random variables with probability of success $p$. As shown in this example, $\hat{p} = \overline{X}$ is the mle of $p$. Recall that in the large sample confidence interval for $p$, (4.2.7), an estimate of $\sqrt{p(1-p)}$ is required. By Theorem 6.1.2, the mle of this quantity is $\sqrt{\hat{p}(1-\hat{p})}$.
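As a small numerical illustration (not from the text; the true parameter, sample size, and seed below are arbitrary choices), the following Python sketch simulates a Bernoulli sample, computes the mle $\hat{p} = \overline{x}$, and obtains the mle of $\sqrt{p(1-p)}$ simply by plugging $\hat{p}$ into the function, as Theorem 6.1.2 permits.

  import numpy as np

  # Sketch of Theorem 6.1.2 (invariance of the mle) for the Bernoulli model.
  # The true p0, the seed, and the sample size n are arbitrary choices.
  rng = np.random.default_rng(123)
  p0, n = 0.3, 500
  x = rng.binomial(1, p0, size=n)        # iid Bernoulli(p0) sample

  p_hat = x.mean()                       # mle of p (Example 4.1.2)
  g_hat = np.sqrt(p_hat * (1 - p_hat))   # mle of sqrt(p(1-p)) by invariance

  # Plug-in quantity used in the large-sample interval (4.2.7):
  #   p_hat +/- z_{alpha/2} * g_hat / sqrt(n)
  print(p_hat, g_hat)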
We close this section by showing that maximum likelihood estimators, under regularity conditions, are consistent estimators. Recall that $\mathbf{X}' = (X_1, \ldots, X_n)$.
Theorem 6.1.3. Assume that $X_1, \ldots, X_n$ satisfy the regularity conditions (R0) through (R2), where $\theta_0$ is the true parameter, and further that $f(x;\theta)$ is differentiable with respect to $\theta$ in $\Omega$. Then the likelihood equation,
$$\frac{\partial}{\partial \theta} L(\theta) = 0,$$
or equivalently,
$$\frac{\partial}{\partial \theta} l(\theta) = 0,$$
has a solution $\hat{\theta}_n$ such that $\hat{\theta}_n \xrightarrow{P} \theta_0$.
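Before turning to the proof, here is a brief Monte Carlo sketch of what the theorem asserts (not part of the text; the Bernoulli model, $\epsilon$, sample sizes, and replicate count are arbitrary choices). For the Bernoulli model the likelihood equation $l'(p) = 0$ has the solution $\hat{p}_n = \overline{x}$, and the estimated probability that $|\hat{p}_n - p_0| < \epsilon$ approaches 1 as $n$ grows.

  import numpy as np

  # Monte Carlo sketch of consistency: P(|p_hat_n - p0| < eps) -> 1 as n increases.
  # p0, eps, the sample sizes, and the number of replicates are arbitrary choices.
  rng = np.random.default_rng(1)
  p0, eps, reps = 0.3, 0.05, 2000

  for n in (25, 100, 400, 1600):
      x = rng.binomial(1, p0, size=(reps, n))       # reps independent samples of size n
      p_hat = x.mean(axis=1)                        # root of l'(p) = 0 for each sample
      prob = np.mean(np.abs(p_hat - p0) < eps)      # estimated P(|p_hat_n - p0| < eps)
      print(n, prob)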


Proof: Because $\theta_0$ is an interior point in $\Omega$, $(\theta_0 - a, \theta_0 + a) \subset \Omega$, for some $a > 0$. Define $S_n$ to be the event
$$S_n = \{\mathbf{X} : l(\theta_0; \mathbf{X}) > l(\theta_0 - a; \mathbf{X})\} \cap \{\mathbf{X} : l(\theta_0; \mathbf{X}) > l(\theta_0 + a; \mathbf{X})\}.$$
By Theorem 6.1.1, $P(S_n) \to 1$. So we can restrict attention to the event $S_n$. But on $S_n$, $l(\theta)$ has a local maximum, say $\hat{\theta}_n$, such that $\theta_0 - a < \hat{\theta}_n < \theta_0 + a$ and $l'(\hat{\theta}_n) = 0$. That is,


$$S_n \subset \left\{\mathbf{X} : |\hat{\theta}_n(\mathbf{X}) - \theta_0| < a\right\} \cap \left\{\mathbf{X} : l'(\hat{\theta}_n(\mathbf{X})) = 0\right\}.$$

Therefore,
$$1 = \lim_{n \to \infty} P(S_n) \le \lim_{n \to \infty} P\left[\left\{\mathbf{X} : |\hat{\theta}_n(\mathbf{X}) - \theta_0| < a\right\} \cap \left\{\mathbf{X} : l'(\hat{\theta}_n(\mathbf{X})) = 0\right\}\right] \le 1;$$