Robert V. Hogg, Joseph W. McKean, Allen T. Craig

6.2. Rao–Cramér Lower Bound and Efficiency

where the last equality follows from the transformation z = x − θ. Hence, in the
location model, the information does not depend on θ.

As an illustration, reconsider Example 6.1.1 concerning the Laplace distribution.
Let X_1, X_2, \ldots, X_n be a random sample from this distribution. Then it follows that
X_i can be expressed as

    X_i = \theta + e_i,                                             (6.2.9)

where e_1, \ldots, e_n are iid with common pdf f(z) = 2^{-1} \exp\{-|z|\}, for -\infty < z < \infty.
As we did in Example 6.1.1, use \frac{d}{dz}|z| = \mathrm{sgn}(z). Then f'(z) = -2^{-1}\,\mathrm{sgn}(z)\exp\{-|z|\}
and, hence, [f'(z)/f(z)]^2 = [-\mathrm{sgn}(z)]^2 = 1, so that

    I(\theta) = \int_{-\infty}^{\infty} \left( \frac{f'(z)}{f(z)} \right)^2 f(z)\,dz
              = \int_{-\infty}^{\infty} f(z)\,dz = 1.               (6.2.10)

Note that the Laplace pdf does not satisfy the regularity conditions, but this
argument can be made rigorous; see Huber (1981) and also Chapter 10.
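As a numerical sanity check (ours, not from the text), the integral in (6.2.10) can be approximated by a Riemann sum: since [f'(z)/f(z)]^2 = 1 away from z = 0, integrating the squared score against the Laplace density should return approximately 1. The grid bounds and step size below are arbitrary choices.

```python
import math

def laplace_pdf(z):
    # f(z) = (1/2) exp(-|z|), the Laplace pdf from (6.2.9)
    return 0.5 * math.exp(-abs(z))

def score_sq(z):
    # [f'(z)/f(z)]^2 = [-sgn(z)]^2 = 1 for z != 0
    sgn = 1.0 if z > 0 else -1.0
    return (-sgn) ** 2

# Riemann-sum approximation of I(theta) = ∫ [f'(z)/f(z)]^2 f(z) dz
# over [-20, 20]; the tail mass beyond ±20 is negligible (≈ e^{-20}).
h = 1e-3
grid = (-20.0 + i * h for i in range(int(40.0 / h) + 1))
info = sum(score_sq(z) * laplace_pdf(z) * h for z in grid)
print(info)  # close to 1, matching (6.2.10)
```

The approximation agrees with the exact value I(θ) = 1 to well within the discretization error.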


From (6.2.6), for a sample of size 1, say X_1, Fisher information is the variance
of the random variable \partial \log f(X_1; \theta)/\partial\theta. What about a sample of size n? Let
X_1, X_2, \ldots, X_n be a random sample from a distribution having pdf f(x; θ). The
likelihood L(θ) is the pdf of the random sample, and the random variable whose
variance is the information in the sample is given by


    \frac{\partial \log L(\theta, \mathbf{X})}{\partial \theta}
        = \sum_{i=1}^{n} \frac{\partial \log f(X_i; \theta)}{\partial \theta}.

The summands are iid with common variance I(θ). Hence the information in the
sample is

    \mathrm{Var}\left( \frac{\partial \log L(\theta, \mathbf{X})}{\partial \theta} \right)
        = n I(\theta).                                              (6.2.11)

Thus the information in a random sample of size n is n times the information in a
sample of size 1. So, in Example 6.2.1, the Fisher information in a random sample
of size n from a Bernoulli b(1, θ) distribution is n/[θ(1 − θ)].
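Identity (6.2.11) and the Bernoulli value n/[θ(1 − θ)] can be checked by simulation (our sketch, not from the text): draw many Bernoulli samples of size n, compute the score of each sample, and compare its empirical variance with n/[θ(1 − θ)]. The values of θ, n, and the number of replications below are arbitrary.

```python
import random

random.seed(0)
theta, n, reps = 0.3, 10, 100_000

def sample_score(theta, n):
    # Score of a Bernoulli b(1, theta) sample of size n:
    # sum_i [ x_i/theta - (1 - x_i)/(1 - theta) ],
    # i.e., the sum in the display above with f(x; theta) = theta^x (1-theta)^(1-x).
    s = 0.0
    for _ in range(n):
        x = 1 if random.random() < theta else 0
        s += x / theta - (1 - x) / (1 - theta)
    return s

scores = [sample_score(theta, n) for _ in range(reps)]
m = sum(scores) / reps
emp_var = sum((s - m) ** 2 for s in scores) / reps

# By (6.2.11), Var of the score should be n * I(theta) = n / [theta (1 - theta)].
print(emp_var, n / (theta * (1 - theta)))
```

With θ = 0.3 and n = 10, the theoretical value is 10/0.21 ≈ 47.6; the Monte Carlo estimate lands close to it, and the empirical mean of the score is near 0, as the regularity conditions require.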
We are now ready to obtain the Rao–Cramér lower bound, which we state as a
theorem.


Theorem 6.2.1 (Rao–Cramér Lower Bound). Let X_1, \ldots, X_n be iid with common
pdf f(x; θ) for θ ∈ Ω. Assume that the regularity conditions (R0)–(R4) hold. Let
Y = u(X_1, X_2, \ldots, X_n) be a statistic with mean E(Y) = E[u(X_1, X_2, \ldots, X_n)] = k(θ). Then

    \mathrm{Var}(Y) \geq \frac{[k'(\theta)]^2}{n I(\theta)}.        (6.2.12)
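Before the proof, a small illustration of the bound (our own example, not from the text): for a Bernoulli b(1, θ) sample, take Y to be the sample mean, so k(θ) = E(Y) = θ and k'(θ) = 1. Using I(θ) = 1/[θ(1 − θ)] from Example 6.2.1, the right side of (6.2.12) is θ(1 − θ)/n, which Var(Y) attains with equality. The particular θ and n are arbitrary.

```python
theta, n = 0.3, 25

I = 1.0 / (theta * (1.0 - theta))      # Fisher information, Example 6.2.1
bound = 1.0 ** 2 / (n * I)             # RHS of (6.2.12) with k'(theta) = 1
var_mean = theta * (1.0 - theta) / n   # exact Var of the Bernoulli sample mean

# The sample mean attains the Rao–Cramér bound with equality.
print(bound, var_mean)
```

Statistics whose variance attains the bound in this way are the subject of the efficiency discussion in this section.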


Proof: The proof is for the continuous case, but the proof for the discrete case is
quite similar. Write the mean of Y as

    k(\theta) = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty}
        u(x_1, \ldots, x_n)\, f(x_1; \theta) \cdots f(x_n; \theta)\, dx_1 \cdots dx_n.