Robert V. Hogg, Joseph W. McKean, Allen T. Craig

Sufficiency

Theorem 7.9.1. Let X_1, X_2, ..., X_n denote a random sample from a distribution having a pdf f(x; θ), θ ∈ Ω, where Ω is an interval set. Suppose that the statistic Y_1 is a complete and sufficient statistic for θ. Let Z = u(X_1, X_2, ..., X_n) be any other statistic (not a function of Y_1 alone). If the distribution of Z does not depend upon θ, then Z is independent of the sufficient statistic Y_1.


In the discussion above, it is interesting to observe that if Y_1 is a sufficient statistic for θ, then the independence of Y_1 and Z implies that the distribution of Z does not depend upon θ whether {g_1(y_1; θ) : θ ∈ Ω} is or is not complete. Conversely, to prove the independence from the fact that g_2(z) does not depend upon θ, we definitely need the completeness. Accordingly, if we are dealing with situations in which we know that the family {g_1(y_1; θ) : θ ∈ Ω} is complete (such as a regular case of the exponential class), we can say that the statistic Z is independent of the sufficient statistic Y_1 if and only if the distribution of Z does not depend upon θ (i.e., Z is an ancillary statistic).
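The way completeness does this work in the converse direction can be made explicit. The following derivation is a standard sketch of the argument, not reproduced verbatim from the text:

```latex
% For any event \{Z \in A\}, condition on Y_1 and use ancillarity of Z:
\[
  E_\theta\!\left[ P(Z \in A \mid Y_1) \right]
    = P_\theta(Z \in A) = P(Z \in A),
\]
% where the right-hand side is free of \theta.  Hence
\[
  E_\theta\!\left[ P(Z \in A \mid Y_1) - P(Z \in A) \right] = 0
  \qquad \text{for all } \theta \in \Omega .
\]
% The bracketed quantity is a function of Y_1 alone, so completeness of
% \{g_1(y_1;\theta) : \theta \in \Omega\} forces it to be zero with
% probability 1:
\[
  P(Z \in A \mid Y_1) = P(Z \in A) \quad \text{a.e.}
\]
% Since this holds for every event A, the statistics Z and Y_1 are
% independent.
```

Without completeness, the second display only says the function of Y_1 has mean zero for every θ; it is completeness that upgrades "mean zero for all θ" to "identically zero."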
It should be remarked that the theorem (including the special formulation of it for regular cases of the exponential class) extends immediately to probability density functions that involve m parameters for which there exist m joint sufficient statistics. For example, let X_1, X_2, ..., X_n be a random sample from a distribution having the pdf f(x; θ_1, θ_2) that represents a regular case of the exponential class, so that there are two joint complete sufficient statistics for θ_1 and θ_2. Then any other statistic Z = u(X_1, X_2, ..., X_n) is independent of the joint complete sufficient statistics if and only if the distribution of Z does not depend upon θ_1 or θ_2.

We present an example of the theorem that provides an alternative proof of the independence of X̄ and S^2, the mean and the variance of a random sample of size n from a distribution that is N(μ, σ^2). This proof is given as if we were unaware that (n − 1)S^2/σ^2 is χ^2(n − 1), because that fact and the independence were established in Theorem 3.6.1.


Example 7.9.1. Let X_1, X_2, ..., X_n denote a random sample of size n from a distribution that is N(μ, σ^2). We know that the mean X̄ of the sample is, for every known σ^2, a complete sufficient statistic for the parameter μ, −∞ < μ < ∞. Consider the statistic

    S^2 = (1/(n − 1)) Σ_{i=1}^n (X_i − X̄)^2,

which is location-invariant. Thus S^2 must have a distribution that does not depend upon μ; and hence, by the theorem, S^2 and X̄, the complete sufficient statistic for μ, are independent.
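As a numerical sanity check, the independence can be probed by simulation. The following is a Monte Carlo sketch, not part of the text; the values of μ, σ, n, the number of replications, and the seed are arbitrary choices:

```python
# Monte Carlo sketch: for N(mu, sigma^2) samples, the sample correlation
# between X-bar and S^2 should be near zero, consistent with their
# independence.  (mu, sigma, n, reps, and the seed are arbitrary.)
import random
import statistics

random.seed(20240917)
mu, sigma, n, reps = 5.0, 2.0, 10, 4000

xbars, s2s = [], []
for _ in range(reps):
    xs = [random.gauss(mu, sigma) for _ in range(n)]
    xbars.append(statistics.mean(xs))
    s2s.append(statistics.variance(xs))   # the (n - 1)-divisor S^2

def pearson(a, b):
    """Plain Pearson correlation, written out to avoid version issues."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

corr = pearson(xbars, s2s)
print(f"corr(X-bar, S^2) over {reps} samples: {corr:.4f}")
```

Zero correlation alone would not prove independence; the theorem supplies the stronger conclusion, and the simulation merely illustrates consistency with it.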


Example 7.9.2. Let X_1, X_2, ..., X_n be a random sample of size n from the distribution having pdf

    f(x; θ) = exp{−(x − θ)},  θ < x < ∞,  −∞ < θ < ∞,
            = 0 elsewhere.

Here the pdf is of the form f(x − θ), where f(w) = e^{−w}, 0 < w < ∞, zero elsewhere.
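The location-family form f(x − θ) has a consequence that is easy to see numerically: a location-invariant statistic, such as the sample range, has the same distribution for every θ. A minimal simulation sketch (not from the text; the seed, sample size, and replication count are arbitrary):

```python
# Sketch: for the shifted exponential, a location-invariant statistic
# (here the sample range) does not change with theta, since
# max(theta + E_i) - min(theta + E_i) = max(E_i) - min(E_i).
import random

def sample_ranges(theta, n, reps, seed):
    """Draw `reps` samples of size n from the shifted exponential and
    return the sample range of each."""
    rng = random.Random(seed)
    out = []
    for _ in range(reps):
        xs = [theta + rng.expovariate(1.0) for _ in range(n)]
        out.append(max(xs) - min(xs))
    return out

# Using the same seed isolates the effect of theta: the ranges agree
# up to floating-point rounding.
r0 = sample_ranges(0.0, n=8, reps=2000, seed=7)
r5 = sample_ranges(5.0, n=8, reps=2000, seed=7)
print(max(abs(a - b) for a, b in zip(r0, r5)))
```

The shift θ cancels in max − min term by term, which is exactly the location invariance the example exploits.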
Moreover, we know (Exercise 7.4.5) that the first order statistic Y_1 = min(X_i) is a