Robert V. Hogg, Joseph W. McKean, Allen T. Craig

Maximum Likelihood Methods

that the distribution is $\Gamma(1, 1/\theta)$. Because the $X_i$'s are independent, Theorem 3.3.1 shows that $W = \sum_{i=1}^{n} Y_i$ is $\Gamma(n, 1/\theta)$. Theorem 3.3.2 shows that
\[
E[W^k] = \frac{(n+k-1)!}{\theta^k (n-1)!}, \tag{6.2.17}
\]
for $k > -n$. So, in particular for $k = -1$, we get
\[
E[\hat{\theta}] = n E[W^{-1}] = \theta\,\frac{n}{n-1}.
\]
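The bias factor $n/(n-1)$ can be checked numerically. The sketch below simulates the mle, written as $\hat{\theta} = n/W$ consistent with $E[\hat{\theta}] = nE[W^{-1}]$ above; the particular values $\theta = 2$ and $n = 5$ are illustrative assumptions, not from the text.

```python
import numpy as np

# Monte Carlo sanity check of E[theta_hat] = theta * n/(n-1).
# theta = 2 and n = 5 are illustrative choices (assumptions).
rng = np.random.default_rng(0)
theta, n, reps = 2.0, 5, 200_000

# Each row is one sample of size n from Gamma(1, 1/theta),
# i.e. an exponential distribution with mean 1/theta.
x = rng.exponential(scale=1.0 / theta, size=(reps, n))
theta_hat = n / x.sum(axis=1)  # mle computed on each sample

# Sample mean should be close to theta * n/(n-1) = 2 * 5/4 = 2.5
print(theta_hat.mean())
```

With 200,000 replications the Monte Carlo average lands within a few thousandths of $2.5$, not the true value $\theta = 2$, illustrating the bias.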

Hence, $\hat{\theta}$ is biased, but the bias vanishes as $n \to \infty$. Also, note that the estimator $[(n-1)/n]\hat{\theta}$ is unbiased. For $k = -2$, we get


\[
E[\hat{\theta}^2] = n^2 E[W^{-2}] = \theta^2\,\frac{n^2}{(n-1)(n-2)},
\]
and, hence, after simplifying $E(\hat{\theta}^2) - [E(\hat{\theta})]^2$, we obtain
\[
\mathrm{Var}(\hat{\theta}) = \theta^2\,\frac{n^2}{(n-1)^2(n-2)}.
\]
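The simplification of $E(\hat{\theta}^2) - [E(\hat{\theta})]^2$ can be verified exactly with rational arithmetic. In the sketch below the moments are expressed in units of $\theta^2$ and $\theta$, so $\theta$ cancels, and the identity is checked for a range of $n$:

```python
from fractions import Fraction

# Verify n^2/((n-1)(n-2)) - [n/(n-1)]^2 == n^2/((n-1)^2 (n-2))
# exactly, for n = 3, ..., 49.  Moments are taken as multiples of
# theta^2 and theta, so theta drops out of the identity.
for n in range(3, 50):
    second_moment = Fraction(n * n, (n - 1) * (n - 2))  # E[theta_hat^2]/theta^2
    first_moment = Fraction(n, n - 1)                   # E[theta_hat]/theta
    variance = second_moment - first_moment ** 2
    assert variance == Fraction(n * n, (n - 1) ** 2 * (n - 2))
print("variance identity holds for n = 3, ..., 49")
```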

From this, we can obtain the variance of the unbiased estimator $[(n-1)/n]\hat{\theta}$, i.e.,
\[
\mathrm{Var}\left(\frac{n-1}{n}\,\hat{\theta}\right) = \frac{\theta^2}{n-2}.
\]

From above, the information is $I(\theta) = \theta^{-2}$ and, hence, the variance of an unbiased efficient estimator is $\theta^2/n$. Because $\theta^2/(n-2) > \theta^2/n$, the unbiased estimator $[(n-1)/n]\hat{\theta}$ is not efficient. Notice, though, that its efficiency (as in Definition 6.2.2) is $(n-2)/n$, which converges to 1 as $n \to \infty$. Later in this section, we say that $[(n-1)/n]\hat{\theta}$ is asymptotically efficient.
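The convergence of the efficiency $(\theta^2/n)/(\theta^2/(n-2)) = (n-2)/n$ to 1 is easy to tabulate; the sample sizes below are illustrative:

```python
# Efficiency of [(n-1)/n] * theta_hat is (theta^2/n) / (theta^2/(n-2))
# = (n-2)/n, which tends to 1 as n grows.
for n in (5, 10, 100, 1000):
    print(n, (n - 2) / n)
```

For $n = 5$ the efficiency is only $0.6$, but by $n = 1000$ it is $0.998$.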


In the above examples, we were able to obtain the mles in closed form along
with their distributions and, hence, moments. This is often not the case. Maximum
likelihood estimators, however, have an asymptotic normal distribution. In fact,
mles are asymptotically efficient. To prove these assertions, we need the additional
regularity condition given by


Assumptions 6.2.2 (Additional Regularity Condition). Regularity condition (R5) is

(R5) The pdf $f(x;\theta)$ is three times differentiable as a function of $\theta$. Further, for all $\theta \in \Omega$, there exist a constant $c$ and a function $M(x)$ such that
\[
\left| \frac{\partial^3}{\partial \theta^3} \log f(x;\theta) \right| \le M(x),
\]
with $E_{\theta_0}[M(X)] < \infty$, for all $\theta_0 - c < \theta < \theta_0 + c$ and all $x$ in the support of $X$.