Robert V. Hogg, Joseph W. McKean, Allen T. Craig

7.6. Functions of a Parameter

7.6.8, the joint distribution of $X_1$ and $\bar{X}$ is bivariate normal with mean
vector $(\theta, \theta)$, variances $\sigma_1^2 = 1$ and $\sigma_2^2 = 1/n$, and
correlation coefficient $\rho = 1/\sqrt{n}$. Thus the conditional pdf of $X_1$,
given $\bar{X} = \bar{x}$, is normal with linear conditional mean
\[
\theta + \frac{\rho \sigma_1}{\sigma_2}(\bar{x} - \theta) = \bar{x}
\]
and with variance
\[
\sigma_1^2 (1 - \rho^2) = \frac{n-1}{n}.
\]
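The stated joint-distribution facts can be checked by a short simulation (a minimal sketch; the values $\theta = 2$ and $n = 5$ are arbitrary choices of ours): the empirical correlation of $X_1$ and $\bar{X}$ should be close to $1/\sqrt{n}$, and the empirical variance of $\bar{X}$ close to $1/n$.

```python
import math
import random

random.seed(0)
theta, n, reps = 2.0, 5, 200_000   # hypothetical parameter values for illustration

# Draw many samples of size n from N(theta, 1); record X_1 and Xbar for each.
x1s, xbars = [], []
for _ in range(reps):
    sample = [random.gauss(theta, 1.0) for _ in range(n)]
    x1s.append(sample[0])
    xbars.append(sum(sample) / n)

def mean(v):
    return sum(v) / len(v)

def cov(a, b):
    ma, mb = mean(a), mean(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)

rho = cov(x1s, xbars) / math.sqrt(cov(x1s, x1s) * cov(xbars, xbars))
print("empirical rho:", rho, " theory 1/sqrt(n):", 1 / math.sqrt(n))
print("empirical Var(Xbar):", cov(xbars, xbars), " theory 1/n:", 1 / n)
```

With 200,000 replications both empirical quantities agree with $1/\sqrt{n}$ and $1/n$ to two decimal places.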

The conditional expectation of $u(X_1)$, given $\bar{X} = \bar{x}$, is then
\[
\varphi(\bar{x}) = \int_{-\infty}^{\infty} u(x_1) \sqrt{\frac{n}{n-1}} \,
\frac{1}{\sqrt{2\pi}} \exp\!\left[ -\frac{n(x_1 - \bar{x})^2}{2(n-1)} \right] dx_1
= \int_{-\infty}^{c} \sqrt{\frac{n}{n-1}} \,
\frac{1}{\sqrt{2\pi}} \exp\!\left[ -\frac{n(x_1 - \bar{x})^2}{2(n-1)} \right] dx_1.
\]

The change of variable $z = \sqrt{n}(x_1 - \bar{x})/\sqrt{n-1}$ enables us to write this conditional
expectation as
\[
\varphi(\bar{x}) = \int_{-\infty}^{c'} \frac{1}{\sqrt{2\pi}} \, e^{-z^2/2} \, dz
= \Phi(c') = \Phi\!\left[ \frac{\sqrt{n}(c - \bar{x})}{\sqrt{n-1}} \right],
\]
where $c' = \sqrt{n}(c - \bar{x})/\sqrt{n-1}$. Thus the unique MVUE of $\Phi(c - \theta)$ is, for every
fixed constant $c$, given by $\varphi(\bar{X}) = \Phi[\sqrt{n}(c - \bar{X})/\sqrt{n-1}]$.
In this example the mle of $\Phi(c - \theta)$ is $\Phi(c - \bar{X})$. These two estimators are close
because $\sqrt{n/(n-1)} \to 1$, as $n \to \infty$.
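Both claims can be illustrated by Monte Carlo (a sketch with hypothetical values $\theta = 1$, $n = 10$, $c = 1.5$; the helper `Phi` is our own wrapper around the standard normal cdf): averaged over many samples, $\Phi[\sqrt{n}(c - \bar{X})/\sqrt{n-1}]$ recovers $\Phi(c - \theta)$, while the mle $\Phi(c - \bar{X})$ comes close but carries a small bias.

```python
import math
import random

random.seed(1)

def Phi(z):
    """Standard normal cdf via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

theta, n, c, reps = 1.0, 10, 1.5, 100_000   # illustrative values, not from the text
target = Phi(c - theta)                      # the parameter Phi(c - theta) being estimated

mvue_sum = mle_sum = 0.0
for _ in range(reps):
    xbar = sum(random.gauss(theta, 1.0) for _ in range(n)) / n
    mvue_sum += Phi(math.sqrt(n) * (c - xbar) / math.sqrt(n - 1))  # MVUE
    mle_sum += Phi(c - xbar)                                       # mle

print("target:", target)
print("MVUE average:", mvue_sum / reps)   # essentially equal to target (unbiased)
print("mle average: ", mle_sum / reps)    # close to target, slightly biased
```

The MVUE average matches $\Phi(c - \theta)$ to three decimal places; the mle average differs by roughly $0.01$ at $n = 10$, a gap that shrinks as $n$ grows since $\sqrt{n/(n-1)} \to 1$.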
Remark 7.6.1. We should like to draw the attention of the reader to a rather
important fact. This has to do with the adoption of a principle, such as the principle
of unbiasedness and minimum variance. A principle is not a theorem; and seldom
does a principle yield satisfactory results in all cases. So far, this principle has
provided quite satisfactory results. To see that this is not always the case, let $X$
have a Poisson distribution with parameter $\theta$, $0 < \theta < \infty$. We may look upon $X$ as
a random sample of size 1 from this distribution. Thus $X$ is a complete sufficient
statistic for $\theta$. We seek the estimator of $e^{-2\theta}$ that is unbiased and has minimum
variance. Consider $Y = (-1)^X$. We have
\[
E(Y) = E[(-1)^X] = \sum_{x=0}^{\infty} \frac{(-\theta)^x e^{-\theta}}{x!} = e^{-2\theta}.
\]
Accordingly, $(-1)^X$ is the MVUE of $e^{-2\theta}$. Here this estimator leaves much to be
desired. We are endeavoring to elicit some information about the number $e^{-2\theta}$,
where $0 < e^{-2\theta} < 1$; yet our point estimate is either $-1$ or $+1$, each of which is a
very poor estimate of a number between 0 and 1. We do not wish to leave the reader
with the impression that an MVUE is bad. That is not the case at all. We merely
wish to point out that if one tries hard enough, one can find instances where such
a statistic is not good. Incidentally, the maximum likelihood estimator of $e^{-2\theta}$ is,
in the case where the sample size equals 1, $e^{-2X}$, which is probably a much better
estimator in practice than is the unbiased estimator $(-1)^X$.
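The pathology is easy to see numerically (a sketch with the hypothetical choice $\theta = 1$; the inverse-transform Poisson sampler is our own, suitable only for small $\theta$): the average of $(-1)^X$ does converge to $e^{-2\theta}$, as unbiasedness promises, yet every individual estimate is $\pm 1$, while $e^{-2X}$ always lands in $(0, 1]$.

```python
import math
import random

random.seed(2)
theta, reps = 1.0, 200_000        # illustrative parameter value
target = math.exp(-2 * theta)     # the quantity e^{-2*theta} being estimated

def poisson(lam):
    """Knuth's inverse-transform Poisson sampler (fine for small lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p < L:
            return k
        k += 1

mvue_sum = mle_sum = 0.0
for _ in range(reps):
    x = poisson(theta)
    mvue_sum += (-1) ** x         # MVUE: each value is -1 or +1
    mle_sum += math.exp(-2 * x)   # mle: each value lies in (0, 1]

print("target e^(-2 theta):", target)
print("MVUE average:", mvue_sum / reps)   # converges to target, but each term is +/-1
print("mle average: ", mle_sum / reps)    # biased, yet each term is a sensible number
```

The long-run average of $(-1)^X$ indeed approaches $e^{-2} \approx 0.135$, but only because wildly inappropriate values of $-1$ and $+1$ cancel; the mle's average is biased upward, yet every single realization is a plausible estimate of a number between 0 and 1.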
