6.4. Multiparameter Case: Estimation

Using (6.4.6) and (6.4.7) together, we obtain

$$I_{jk} = -E\left[\frac{\partial^2}{\partial\theta_j\,\partial\theta_k}\log f(X;\boldsymbol{\theta})\right]. \qquad (6.4.8)$$
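As a quick symbolic check of (6.4.8) (not part of the text; a minimal sketch assuming SymPy, applied to the $N(\mu,\sigma^2)$ log-pdf that Example 6.4.3 below works out by hand), the expectation of the product of first partials and the negated mixed second partial agree for the $(1,2)$ entry:

```python
# SymPy sketch (an illustration, not the text's own computation): for one
# N(mu, sigma^2) observation, E[(d log f/d mu)(d log f/d sigma)] equals
# -E[d^2 log f/(d mu d sigma)], the (1,2) case of identity (6.4.8).
import sympy as sp
from sympy.stats import Normal, E

x, mu = sp.symbols('x mu', real=True)
sigma = sp.symbols('sigma', positive=True)

# Normal log-pdf, as written out in (6.4.12) below
logf = -sp.Rational(1, 2)*sp.log(2*sp.pi) - sp.log(sigma) - (x - mu)**2/(2*sigma**2)

X = Normal('X', mu, sigma)   # a single observation

def expect(e):
    # Expectation under X ~ N(mu, sigma^2)
    return sp.simplify(E(sp.expand(e.subs(x, X))))

score_product = expect(sp.diff(logf, mu) * sp.diff(logf, sigma))
neg_second = expect(-sp.diff(logf, mu, sigma))
print(score_product, neg_second)  # both reduce to 0, consistent with (6.4.8)
```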


Information for a random sample follows in the same way as the scalar case. The
pdf of the sample is the likelihood function $L(\boldsymbol{\theta};\mathbf{X})$. Replace $f(X;\boldsymbol{\theta})$ by $L(\boldsymbol{\theta};\mathbf{X})$
in the vector given in expression (6.4.3). Because $\log L$ is a sum, this results in the
random vector

$$\nabla \log L(\boldsymbol{\theta};\mathbf{X}) = \sum_{i=1}^{n} \nabla \log f(X_i;\boldsymbol{\theta}). \qquad (6.4.9)$$

Because the summands are iid with common covariance matrix $\mathbf{I}(\boldsymbol{\theta})$, we have

$$\operatorname{Cov}\left(\nabla \log L(\boldsymbol{\theta};\mathbf{X})\right) = n\,\mathbf{I}(\boldsymbol{\theta}). \qquad (6.4.10)$$

As in the scalar case, the information in a random sample of size $n$ is $n$ times the
information in a sample of size 1.
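As a small numerical illustration of (6.4.9) and (6.4.10) (not part of the text; a minimal simulation sketch that assumes NumPy, uses the $N(\mu,\sigma^2)$ score components derived in Example 6.4.3 below, and picks arbitrary parameter values):

```python
# Monte Carlo sketch (an illustration, not the text's own computation):
# the covariance of the sample score, grad log L, should be about n times
# the covariance of a single observation's score, i.e., n * I(theta).
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, reps = 1.0, 2.0, 20, 200_000

x = rng.normal(mu, sigma, size=(reps, n))

# Score components of each observation (see Example 6.4.3 below):
# d log f/d mu = (x - mu)/sigma^2,  d log f/d sigma = -1/sigma + (x - mu)^2/sigma^3
s_mu = (x - mu) / sigma**2
s_sigma = -1.0 / sigma + (x - mu)**2 / sigma**3

# Sample score = sum of the n individual scores, as in (6.4.9)
score = np.column_stack([s_mu.sum(axis=1), s_sigma.sum(axis=1)])
single = np.column_stack([s_mu[:, 0], s_sigma[:, 0]])

# Both prints are approximately diag(n/sigma^2, 2n/sigma^2) = diag(5, 10) here
print(np.cov(score, rowvar=False))       # empirical Cov(grad log L)
print(n * np.cov(single, rowvar=False))  # n times the information in one observation
```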
The diagonal entries of $\mathbf{I}(\boldsymbol{\theta})$ are

$$I_{ii}(\boldsymbol{\theta}) = \operatorname{Var}\left[\frac{\partial \log f(X;\boldsymbol{\theta})}{\partial\theta_i}\right] = -E\left[\frac{\partial^2}{\partial\theta_i^2}\log f(X;\boldsymbol{\theta})\right].$$

This is similar to the case when $\theta$ is a scalar, except now $I_{ii}(\boldsymbol{\theta})$ is a function of the
vector $\boldsymbol{\theta}$. Recall in the scalar case that $(nI(\theta))^{-1}$ was the Rao-Cramér lower bound
for an unbiased estimate of $\theta$. There is an analog to this in the multiparameter case.
In particular, if $Y_j = u_j(X_1,\ldots,X_n)$ is an unbiased estimate of $\theta_j$, then it can be
shown that

$$\operatorname{Var}(Y_j) \geq \frac{1}{n}\left[\mathbf{I}^{-1}(\boldsymbol{\theta})\right]_{jj}; \qquad (6.4.11)$$

see, for example, Lehmann (1983). As in the scalar case, we shall call an unbiased
estimate efficient if its variance attains this lower bound.
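As a concrete, hedged illustration of the bound (6.4.11) (not part of the text; a sketch assuming NumPy, and using the information matrix $\operatorname{diag}(1/\sigma^2,\,2/\sigma^2)$ for the $(\mu,\sigma)$ parameterization of the normal, whose ingredients Example 6.4.3 below derives): for $X_1,\ldots,X_n$ iid $N(\mu,\sigma^2)$ with both parameters unknown, the bound for an unbiased estimator of $\mu$ is $[\mathbf{I}^{-1}(\boldsymbol{\theta})]_{11}/n = \sigma^2/n$, and the sample mean attains it.

```python
# Numerical sketch of the Rao-Cramer bound (6.4.11) for estimating mu in the
# N(mu, sigma^2) model; parameter values here are arbitrary illustrations.
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n, reps = 0.5, 1.5, 50, 100_000

info = np.diag([1 / sigma**2, 2 / sigma**2])   # I(theta) for one observation
bound_mu = np.linalg.inv(info)[0, 0] / n       # (1/n)[I^{-1}(theta)]_{11} = sigma^2/n

# Replicate the experiment: the sample mean is unbiased for mu,
# and its variance matches the bound (it is efficient in this model).
xbar = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)
print(bound_mu, xbar.var())   # both approximately sigma^2/n = 0.045
```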
Example 6.4.3 (Information Matrix for the Normal pdf). The log of a $N(\mu,\sigma^2)$
pdf is given by

$$\log f(x;\mu,\sigma^2) = -\frac{1}{2}\log 2\pi - \log\sigma - \frac{1}{2\sigma^2}(x-\mu)^2. \qquad (6.4.12)$$

The first and second partial derivatives are

$$\begin{aligned}
\frac{\partial \log f}{\partial\mu} &= \frac{1}{\sigma^2}(x-\mu) \\
\frac{\partial^2 \log f}{\partial\mu^2} &= -\frac{1}{\sigma^2} \\
\frac{\partial \log f}{\partial\sigma} &= -\frac{1}{\sigma} + \frac{1}{\sigma^3}(x-\mu)^2 \\
\frac{\partial^2 \log f}{\partial\sigma^2} &= \frac{1}{\sigma^2} - \frac{3}{\sigma^4}(x-\mu)^2 \\
\frac{\partial^2 \log f}{\partial\mu\,\partial\sigma} &= -\frac{2}{\sigma^3}(x-\mu).
\end{aligned}$$
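The example presumably continues by taking expectations of the negated second derivatives, as in (6.4.8). As a brief worked check (a sketch, using only $E[X-\mu]=0$ and $E[(X-\mu)^2]=\sigma^2$ for $X\sim N(\mu,\sigma^2)$):

$$\begin{aligned}
-E\left[\frac{\partial^2 \log f}{\partial\mu^2}\right] &= \frac{1}{\sigma^2}, \\
-E\left[\frac{\partial^2 \log f}{\partial\sigma^2}\right] &= -\frac{1}{\sigma^2} + \frac{3}{\sigma^4}E\left[(X-\mu)^2\right] = \frac{2}{\sigma^2}, \\
-E\left[\frac{\partial^2 \log f}{\partial\mu\,\partial\sigma}\right] &= \frac{2}{\sigma^3}E\left[X-\mu\right] = 0,
\end{aligned}$$

so that the information matrix of a single observation is $\mathbf{I}(\mu,\sigma) = \operatorname{diag}(1/\sigma^2,\,2/\sigma^2)$.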