Robert V. Hogg, Joseph W. McKean, Allen T. Craig

370 Maximum Likelihood Methods

It follows from (6.2.23)–(6.2.25) that
\[
n \ge \max\{N_1, N_2\} \;\Longrightarrow\; P\left[\,\left| -\frac{1}{n}\, l'''(\theta_n^*) \right| \le 1 + E_{\theta_0}[M(X)]\,\right] \ge 1 - \frac{\epsilon}{2};
\]
hence, $n^{-1} l'''(\theta_n^*)$ is bounded in probability.


We next generalize Definitions 6.2.1 and 6.2.2 concerning efficiency to the asymptotic case.


Definition 6.2.3. Let $X_1, \dots, X_n$ be independent and identically distributed with probability density function $f(x; \theta)$. Suppose $\hat\theta_{1n} = \hat\theta_{1n}(X_1, \dots, X_n)$ is an estimator of $\theta_0$ such that
\[
\sqrt{n}\,(\hat\theta_{1n} - \theta_0) \xrightarrow{D} N\!\left(0, \sigma^2_{\hat\theta_{1n}}\right).
\]
Then

(a) The asymptotic efficiency of $\hat\theta_{1n}$ is defined to be
\[
e(\hat\theta_{1n}) = \frac{1/I(\theta_0)}{\sigma^2_{\hat\theta_{1n}}}. \tag{6.2.26}
\]


(b) The estimator $\hat\theta_{1n}$ is said to be asymptotically efficient if the ratio in part (a) is 1.


(c) Let $\hat\theta_{2n}$ be another estimator such that
\[
\sqrt{n}\,(\hat\theta_{2n} - \theta_0) \xrightarrow{D} N\!\left(0, \sigma^2_{\hat\theta_{2n}}\right).
\]
Then the asymptotic relative efficiency (ARE) of $\hat\theta_{1n}$ to $\hat\theta_{2n}$ is the reciprocal of the ratio of their respective asymptotic variances; i.e.,
\[
e(\hat\theta_{1n}, \hat\theta_{2n}) = \frac{\sigma^2_{\hat\theta_{2n}}}{\sigma^2_{\hat\theta_{1n}}}. \tag{6.2.27}
\]
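As a quick check of part (a) (an illustration, not part of the text's running example): for the normal model $N(\theta, \sigma^2)$ with $\sigma^2$ known, the information is $I(\theta_0) = 1/\sigma^2$, and the sample mean satisfies $\sqrt{n}\,(\bar X - \theta_0) \xrightarrow{D} N(0, \sigma^2)$, so
\[
e(\bar X) = \frac{1/I(\theta_0)}{\sigma^2_{\bar X}} = \frac{\sigma^2}{\sigma^2} = 1;
\]
that is, $\bar X$ is asymptotically efficient for this model.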


Hence, by Theorem 6.2.2, under regularity conditions, maximum likelihood estimators are asymptotically efficient estimators. This is a nice optimality result. Also, if two estimators are asymptotically normal with the same asymptotic mean, then intuitively the estimator with the smaller asymptotic variance would be selected over the other as a better estimator. In this case, the ARE of the selected estimator to the nonselected one is greater than 1.
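Definition 6.2.3(c) can be estimated empirically by replacing the asymptotic variances with variances over Monte Carlo replicates. The sketch below (not from the text; it assumes NumPy) does this for the sample median versus the sample mean under normal data, where the known asymptotic value of the ARE is $2/\pi \approx 0.637$:

```python
# Monte Carlo sketch (an illustration, not the text's derivation):
# estimate the ARE of the sample median to the sample mean,
# e(Q2, Xbar) = var(Xbar) / var(Q2), from simulated N(0, 1) samples.
import numpy as np

rng = np.random.default_rng(12345)
n, reps = 200, 4000

samples = rng.normal(loc=0.0, scale=1.0, size=(reps, n))
means = samples.mean(axis=1)          # sample mean of each replicate
medians = np.median(samples, axis=1)  # sample median of each replicate

# Empirical ARE: reciprocal of the ratio of the variances, as in (6.2.27).
are_hat = means.var(ddof=1) / medians.var(ddof=1)
print(round(are_hat, 3))  # should land near 2/pi ~ 0.637
```

An estimate below 1 here says the mean beats the median under normality; Example 6.2.5 shows the ordering reverses under the Laplace distribution.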


Example 6.2.5 (ARE of the Sample Median to the Sample Mean). We obtain this ARE under the Laplace and normal distributions. Consider first the Laplace location model as given in expression (6.2.9); i.e.,
\[
X_i = \theta + e_i, \quad i = 1, \dots, n. \tag{6.2.28}
\]

By Example 6.1.1, we know that the mle of $\theta$ is the sample median, $Q_2$. By (6.2.10), the information $I(\theta_0) = 1$ for this distribution; hence, $Q_2$ is asymptotically normal with mean $\theta$ and variance $1/n$. On the other hand, by the Central Limit Theorem,
