
Differentiating with respect to θ, we obtain

\[
k'(\theta) = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} u(x_1, x_2, \ldots, x_n) \left[ \sum_{i=1}^{n} \frac{1}{f(x_i;\theta)} \frac{\partial f(x_i;\theta)}{\partial \theta} \right] f(x_1;\theta) \cdots f(x_n;\theta) \, dx_1 \cdots dx_n
\]
\[
= \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} u(x_1, x_2, \ldots, x_n) \left[ \sum_{i=1}^{n} \frac{\partial \log f(x_i;\theta)}{\partial \theta} \right] f(x_1;\theta) \cdots f(x_n;\theta) \, dx_1 \cdots dx_n. \tag{6.2.13}
\]

Define the random variable Z by $Z = \sum_{i=1}^{n} [\partial \log f(X_i;\theta)/\partial \theta]$. We know from (6.2.2) and (6.2.11) that E(Z) = 0 and Var(Z) = nI(θ), respectively. Also, equation (6.2.13) can be expressed in terms of expectation as k'(θ) = E(YZ). Hence we have

\[
k'(\theta) = E(YZ) = E(Y)E(Z) + \rho \, \sigma_Y \sqrt{n I(\theta)},
\]

where ρ is the correlation coefficient between Y and Z. Using E(Z) = 0, this simplifies to

\[
\rho = \frac{k'(\theta)}{\sigma_Y \sqrt{n I(\theta)}}.
\]

Because ρ² ≤ 1, we have

\[
\frac{[k'(\theta)]^2}{\sigma_Y^2 \, n I(\theta)} \leq 1,
\]

which, upon rearrangement, is the desired result.
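
The two facts the proof leans on, E(Z) = 0 and Var(Z) = nI(θ), can be checked numerically. Below is a minimal Monte Carlo sketch assuming an exponential model with density f(x; θ) = θe^{−θx}; this model and all constants are illustrative choices, not from the text. For it, ∂ log f(x; θ)/∂θ = 1/θ − x and I(θ) = 1/θ².

import numpy as np

# Monte Carlo check of E(Z) = 0 and Var(Z) = n*I(theta) for an illustrative
# exponential model f(x; theta) = theta*exp(-theta*x), whose score is
# d/dtheta log f(x; theta) = 1/theta - x, with I(theta) = 1/theta**2.
rng = np.random.default_rng(0)
theta, n, reps = 2.0, 10, 200_000          # illustrative values

x = rng.exponential(scale=1.0 / theta, size=(reps, n))
z = (1.0 / theta - x).sum(axis=1)          # Z = sum of the n scores

print(z.mean())                            # close to E(Z) = 0
print(z.var(), n / theta**2)               # close to Var(Z) = n*I(theta) = 2.5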


Corollary 6.2.1. Under the assumptions of Theorem 6.2.1, if Y = u(X_1, ..., X_n) is an unbiased estimator of θ, so that k(θ) = θ, then the Rao–Cramér inequality becomes

\[
\operatorname{Var}(Y) \geq \frac{1}{n I(\theta)}.
\]
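
The corollary is immediate from Theorem 6.2.1: unbiasedness means k(θ) = θ, so k'(θ) = 1, and the general bound specializes to

\[
\operatorname{Var}(Y) \geq \frac{[k'(\theta)]^2}{n I(\theta)} = \frac{1}{n I(\theta)}.
\]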

Consider the Bernoulli model with probability of success θ which was treated in Example 6.2.1. In the example we showed that 1/nI(θ) = θ(1−θ)/n. From Example 4.1.2 of Section 4.1, the mle of θ is X̄. The mean and variance of a Bernoulli(θ) distribution are θ and θ(1−θ), respectively. Hence the mean and variance of X̄ are θ and θ(1−θ)/n, respectively. That is, in this case the variance of the mle has attained the Rao–Cramér lower bound.
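
This attainment is easy to confirm by simulation. The following sketch, with illustrative values of θ, n, and the number of replications, compares the sampled variance of the mle X̄ against the bound θ(1−θ)/n.

import numpy as np

# Simulation check for the Bernoulli example: the variance of the mle X-bar
# should match the Rao-Cramer lower bound 1/(n*I(theta)) = theta*(1-theta)/n.
rng = np.random.default_rng(0)
theta, n, reps = 0.3, 50, 200_000          # illustrative values

xbar = rng.binomial(n, theta, size=reps) / n   # X-bar for each replication
print(xbar.var())                          # simulated Var(X-bar)
print(theta * (1 - theta) / n)             # lower bound = 0.0042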
We now make the following definitions.


Definition 6.2.1 (Efficient Estimator). Let Y be an unbiased estimator of a parameter θ in the case of point estimation. The statistic Y is called an efficient estimator of θ if and only if the variance of Y attains the Rao–Cramér lower bound.
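
As an illustration of the definition, consider a sketch under an assumed N(θ, σ²) model with σ known (this model is not part of the text here): I(θ) = 1/σ², so the lower bound is σ²/n. The sample mean attains it and is efficient, while the sample median has variance near πσ²/(2n) for large n and hence is not.

import numpy as np

# For N(theta, sigma^2) with sigma known, I(theta) = 1/sigma**2, so the
# Rao-Cramer lower bound is sigma**2/n. X-bar attains it (efficient); the
# sample median's variance is about pi*sigma**2/(2n) for large n (not
# efficient). Model and constants are illustrative assumptions.
rng = np.random.default_rng(0)
theta, sigma, n, reps = 0.0, 1.0, 101, 200_000

x = rng.normal(theta, sigma, size=(reps, n))
print(sigma**2 / n)                        # lower bound, about 0.0099
print(x.mean(axis=1).var())                # ~ bound: X-bar is efficient
print(np.median(x, axis=1).var())          # ~ pi*sigma**2/(2*n): larger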
