
Upon taking the negative of the expectations of the second partial derivatives, the
information matrix for a normal density is

\[
\mathbf{I}(\mu, \sigma) = \begin{bmatrix} \frac{1}{\sigma^2} & 0 \\[4pt] 0 & \frac{2}{\sigma^2} \end{bmatrix}. \tag{6.4.13}
\]
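As a quick numerical check of (6.4.13), here is a minimal sketch (assuming NumPy and SciPy are available; the values $\mu = 1$, $\sigma = 2$ are arbitrary illustration choices) that approximates each entry of $\mathbf{I}(\mu, \sigma)$ as the expected negative second partial derivative of the log density, using central finite differences in the parameters and quadrature over $x$:

```python
# Sketch: numerically verify I(mu, sigma) = diag(1/sigma^2, 2/sigma^2) in (6.4.13).
# Assumes NumPy/SciPy; mu = 1, sigma = 2 are arbitrary illustration values.
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def log_f(x, mu, sigma):
    return norm.logpdf(x, loc=mu, scale=sigma)

def info_entry(i, j, mu=1.0, sigma=2.0, h=1e-4):
    """(i, j) entry of I(mu, sigma) as -E[second partial of log f]."""
    theta = np.array([mu, sigma])
    ei, ej = np.eye(2)[i] * h, np.eye(2)[j] * h
    def neg_hess(x):
        # central finite-difference approximation of the (i, j) second partial
        return -(log_f(x, *(theta + ei + ej)) - log_f(x, *(theta + ei - ej))
                 - log_f(x, *(theta - ei + ej)) + log_f(x, *(theta - ei - ej))) / (4 * h**2)
    val, _ = quad(lambda x: neg_hess(x) * norm.pdf(x, mu, sigma), -np.inf, np.inf)
    return val

# Expect approximately [[1/sigma^2, 0], [0, 2/sigma^2]] = [[0.25, 0], [0, 0.5]].
print(np.array([[info_entry(i, j) for j in range(2)] for i in range(2)]))
```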


We may want the information matrix for $(\mu, \sigma^2)$. This can be obtained by taking partial derivatives with respect to $\sigma^2$ instead of $\sigma$; however, in Example 6.4.6, we obtain it via a transformation. From Example 6.4.1, the maximum likelihood estimates of $\mu$ and $\sigma^2$ are $\hat{\mu} = \overline{X}$ and $\hat{\sigma}^2 = (1/n)\sum_{i=1}^{n}(X_i - \overline{X})^2$, respectively. Based on the information matrix, we note that $\overline{X}$ is an efficient estimate of $\mu$ for finite samples. In Example 6.4.6, we consider the sample variance.
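For instance, the following minimal sketch (assuming NumPy; the sample size and true parameter values are arbitrary) computes both estimates on simulated data; note the divisor $n$ rather than $n-1$ in $\hat{\sigma}^2$:

```python
# Sketch: the MLEs of Example 6.4.1 on simulated normal data (values arbitrary).
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=3.0, size=1000)  # true mu = 5, sigma^2 = 9

mu_hat = x.mean()                       # MLE: sample mean
sigma2_hat = np.mean((x - mu_hat)**2)   # MLE: divisor n, not n - 1
print(mu_hat, sigma2_hat)
# Var(X-bar) = sigma^2 / n, which equals the Rao-Cramer bound 1 / (n I_11),
# so X-bar is efficient for mu, as noted above.
```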
Example 6.4.4 (Information Matrix for a Location and Scale Family). Suppose $X_1, X_2, \ldots, X_n$ is a random sample with common pdf $f_X(x) = b^{-1} f\!\left(\frac{x-a}{b}\right)$, $-\infty < x < \infty$, where $(a, b)$ is in the space $\Omega = \{(a, b) : -\infty < a < \infty,\, b > 0\}$ and $f(z)$ is a pdf such that $f(z) > 0$ for $-\infty < z < \infty$. As Exercise 6.4.10 shows, we can model $X_i$ as
\[
X_i = a + b e_i, \tag{6.4.14}
\]
where the $e_i$s are iid with pdf $f(z)$. This is called a location and scale model (LASP). Example 6.4.2 illustrated this model when $f(z)$ had the Laplace pdf. In Exercise 6.4.11, the reader is asked to show that the partial derivatives are

\[
\frac{\partial}{\partial a}\left\{\log\left[\frac{1}{b}\, f\!\left(\frac{x-a}{b}\right)\right]\right\} = -\frac{1}{b}\, \frac{f'\!\left(\frac{x-a}{b}\right)}{f\!\left(\frac{x-a}{b}\right)}
\]
\[
\frac{\partial}{\partial b}\left\{\log\left[\frac{1}{b}\, f\!\left(\frac{x-a}{b}\right)\right]\right\} = -\frac{1}{b}\left[1 + \frac{x-a}{b}\, \frac{f'\!\left(\frac{x-a}{b}\right)}{f\!\left(\frac{x-a}{b}\right)}\right].
\]
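To make these formulas concrete, here is a minimal sketch using the logistic density $f(z) = e^{-z}/(1+e^{-z})^2$ as an illustrative choice, for which $f'(z)/f(z) = -\tanh(z/2)$; the evaluation point and the values of $a$ and $b$ are arbitrary. It compares the displayed partials with numerical derivatives of $\log[b^{-1} f((x-a)/b)]$:

```python
# Sketch: check the two score formulas for the logistic density, where
# f'(z)/f(z) = -tanh(z/2). Evaluation point and (a, b) are arbitrary.
import numpy as np

def logpdf(x, a, b):
    z = (x - a) / b
    return -z - 2 * np.log1p(np.exp(-z)) - np.log(b)  # log[(1/b) f((x-a)/b)]

def scores(x, a, b):
    """The partials with respect to a and b from the displayed formulas."""
    z = (x - a) / b
    ratio = -np.tanh(z / 2)               # f'(z)/f(z) for the logistic
    return -(1 / b) * ratio, -(1 / b) * (1 + z * ratio)

x, a, b, h = 1.3, 0.5, 2.0, 1e-6
fd_a = (logpdf(x, a + h, b) - logpdf(x, a - h, b)) / (2 * h)
fd_b = (logpdf(x, a, b + h) - logpdf(x, a, b - h)) / (2 * h)
print(scores(x, a, b))   # analytic
print((fd_a, fd_b))      # finite differences; the pairs should agree
```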
Using (6.4.5) and (6.4.6), we then obtain
\[
I_{11} = \int_{-\infty}^{\infty} \frac{1}{b^2}\left[\frac{f'\!\left(\frac{x-a}{b}\right)}{f\!\left(\frac{x-a}{b}\right)}\right]^2 \frac{1}{b}\, f\!\left(\frac{x-a}{b}\right) dx.
\]
Now make the substitution $z = (x-a)/b$, $dz = (1/b)\,dx$. Then we have
\[
I_{11} = \frac{1}{b^2} \int_{-\infty}^{\infty} \left[\frac{f'(z)}{f(z)}\right]^2 f(z)\, dz; \tag{6.4.15}
\]
hence, information on the location parameter $a$ does not depend on $a$. As Exercise 6.4.11 shows, upon making this substitution, the other entries in the information matrix are
\[
I_{22} = \frac{1}{b^2} \int_{-\infty}^{\infty} \left[1 + \frac{z f'(z)}{f(z)}\right]^2 f(z)\, dz \tag{6.4.16}
\]
\[
I_{12} = \frac{1}{b^2} \int_{-\infty}^{\infty} z\left[\frac{f'(z)}{f(z)}\right]^2 f(z)\, dz. \tag{6.4.17}
\]
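Continuing the logistic illustration (a sketch assuming SciPy; the scale $b = 2$ is arbitrary), the three integrals can be evaluated numerically. Since the logistic pdf is symmetric, the integrand in (6.4.17) is odd and $I_{12} = 0$:

```python
# Sketch: evaluate (6.4.15)-(6.4.17) numerically for the logistic density.
import numpy as np
from scipy.integrate import quad
from scipy.stats import logistic

ratio = lambda z: -np.tanh(z / 2)   # f'(z)/f(z) for the logistic
f = logistic.pdf                    # e^{-z} / (1 + e^{-z})^2
b = 2.0                             # arbitrary scale

I11 = quad(lambda z: ratio(z)**2 * f(z), -np.inf, np.inf)[0] / b**2
I22 = quad(lambda z: (1 + z * ratio(z))**2 * f(z), -np.inf, np.inf)[0] / b**2
I12 = quad(lambda z: z * ratio(z)**2 * f(z), -np.inf, np.inf)[0] / b**2

# For any symmetric f, z [f'(z)/f(z)]^2 f(z) is odd, so I12 = 0; the integrals
# involve z only, confirming that the information does not depend on a.
print(I11, I22, I12)    # I11 = 1/(3 b^2) for the logistic; I12 ~ 0
```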
