7.3. Properties of a Sufficient Statistic

That is, through this conditioning, the function $\varphi(Y_1)$ of the sufficient statistic $Y_1$ is an unbiased estimator of $\theta$ having a smaller variance than that of the unbiased estimator $Y_2$. We summarize this discussion more formally in the following theorem, which can be attributed to Rao and Blackwell.

Theorem 7.3.1 (Rao–Blackwell). Let $X_1, X_2, \ldots, X_n$, $n$ a fixed positive integer, denote a random sample from a distribution (continuous or discrete) that has pdf or pmf $f(x;\theta)$, $\theta \in \Omega$. Let $Y_1 = u_1(X_1, X_2, \ldots, X_n)$ be a sufficient statistic for $\theta$, and let $Y_2 = u_2(X_1, X_2, \ldots, X_n)$, not a function of $Y_1$ alone, be an unbiased estimator of $\theta$. Then $E(Y_2 \mid y_1) = \varphi(y_1)$ defines a statistic $\varphi(Y_1)$. This statistic $\varphi(Y_1)$ is a function of the sufficient statistic for $\theta$; it is an unbiased estimator of $\theta$; and its variance is less than or equal to that of $Y_2$.


This theorem tells us that in our search for an MVUE of a parameter, we may, if a sufficient statistic for the parameter exists, restrict that search to functions of the sufficient statistic. For if we begin with an unbiased estimator $Y_2$ that is not a function of $Y_1$ alone, then we can always improve on it by computing $E(Y_2 \mid y_1) = \varphi(y_1)$, so that $\varphi(Y_1)$ is an unbiased estimator with a smaller variance than that of $Y_2$.
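
To make the Rao–Blackwell improvement concrete, here is a small simulation sketch; the normal model, sample size, and seed are assumptions for illustration, not from the text. With $X_1, \ldots, X_n$ iid $N(\theta, 1)$, the estimator $Y_2 = X_1$ is unbiased with variance 1, $Y_1 = \sum X_i$ is sufficient, and $\varphi(Y_1) = E(X_1 \mid Y_1) = Y_1/n = \bar{X}$, which has variance $1/n$.

```python
# Simulation sketch of Rao-Blackwellization (assumed N(theta, 1) model).
import numpy as np

rng = np.random.default_rng(seed=1)
theta, n, reps = 2.0, 10, 100_000

x = rng.normal(loc=theta, scale=1.0, size=(reps, n))
y2 = x[:, 0]              # unbiased for theta, but uses only X_1 (variance ~ 1)
phi_y1 = x.mean(axis=1)   # E(Y_2 | Y_1) = Y_1 / n = xbar (variance ~ 1/n)

print(f"Y_2:      mean {y2.mean():.4f}, var {y2.var():.4f}")
print(f"phi(Y_1): mean {phi_y1.mean():.4f}, var {phi_y1.var():.4f}")
```

Both estimators average to $\theta$, but conditioning on the sufficient statistic cuts the variance by a factor of $n$.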
After Theorem 7.3.1, many students believe that it is necessary to find first some unbiased estimator $Y_2$ in their search for $\varphi(Y_1)$, an unbiased estimator of $\theta$ based upon the sufficient statistic $Y_1$. This is not the case at all; Theorem 7.3.1 simply convinces us that we can restrict our search for a best estimator to functions of $Y_1$. Furthermore, there is a connection between sufficient statistics and maximum likelihood estimates, as shown in the following theorem:


Theorem 7.3.2. Let $X_1, X_2, \ldots, X_n$ denote a random sample from a distribution that has pdf or pmf $f(x;\theta)$, $\theta \in \Omega$. If a sufficient statistic $Y_1 = u_1(X_1, X_2, \ldots, X_n)$ for $\theta$ exists and if a maximum likelihood estimator $\hat{\theta}$ of $\theta$ also exists uniquely, then $\hat{\theta}$ is a function of $Y_1 = u_1(X_1, X_2, \ldots, X_n)$.


Proof. Let $f_{Y_1}(y_1;\theta)$ be the pdf or pmf of $Y_1$. Then by the definition of sufficiency, the likelihood function factors as
$$
L(\theta; x_1, x_2, \ldots, x_n) = f(x_1;\theta) f(x_2;\theta) \cdots f(x_n;\theta)
= f_{Y_1}[u_1(x_1, x_2, \ldots, x_n); \theta] \, H(x_1, x_2, \ldots, x_n),
$$
where $H(x_1, x_2, \ldots, x_n)$ does not depend upon $\theta$. Thus $L$ and $f_{Y_1}$, as functions of $\theta$, are maximized simultaneously. Since there is one and only one value of $\theta$ that maximizes $L$, and hence $f_{Y_1}[u_1(x_1, x_2, \ldots, x_n); \theta]$, that value of $\theta$ must be a function of $u_1(x_1, x_2, \ldots, x_n)$. Thus the mle $\hat{\theta}$ is a function of the sufficient statistic $Y_1 = u_1(X_1, X_2, \ldots, X_n)$.
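
To see the factorization at work in a concrete case (an illustrative sketch, not part of the text's proof), suppose $X_1, \ldots, X_n$ is a random sample from a Poisson distribution with mean $\theta$, and let $y_1 = \sum_{i=1}^{n} x_i$, so that $Y_1 = \sum_{i=1}^{n} X_i$ is sufficient with $Y_1 \sim \text{Poisson}(n\theta)$. Then
$$
L(\theta; x_1, \ldots, x_n) = \prod_{i=1}^{n} \frac{\theta^{x_i} e^{-\theta}}{x_i!}
= \underbrace{\frac{(n\theta)^{y_1} e^{-n\theta}}{y_1!}}_{f_{Y_1}(y_1;\theta)}
\underbrace{\frac{y_1!}{n^{y_1} \prod_{i=1}^{n} x_i!}}_{H(x_1, \ldots, x_n)},
$$
and maximizing either $L$ or $f_{Y_1}$ in $\theta$ yields $\hat{\theta} = y_1/n = \bar{x}$, which depends on the data only through the sufficient statistic $Y_1$, as the theorem asserts.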


We know from Chapters 4 and 6 that, generally, mles are asymptotically unbiased estimators of $\theta$. Hence, one way to proceed is to find a sufficient statistic and then find the mle. Based on this, we can often obtain an unbiased estimator that is a function of the sufficient statistic. This process is illustrated in the following example.
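
Before that example, here is a small simulation sketch of this adjust-the-mle workflow; the model, sample size, and seed are assumptions, not from the text. For $X_1, \ldots, X_n$ iid $N(\mu, \theta)$ with $\theta$ the variance, the mle $\hat{\theta} = n^{-1}\sum (X_i - \bar{X})^2$ is a function of the sufficient statistic but is biased, with $E(\hat{\theta}) = (n-1)\theta/n$; rescaling by $n/(n-1)$ gives the unbiased sample variance $S^2$, still a function of the sufficient statistic.

```python
# Sketch: correcting the biased mle of a normal variance (assumed model).
import numpy as np

rng = np.random.default_rng(seed=2)
mu, theta, n, reps = 0.0, 4.0, 5, 200_000

x = rng.normal(loc=mu, scale=np.sqrt(theta), size=(reps, n))
mle = x.var(axis=1, ddof=0)   # (1/n) * sum((x_i - xbar)^2): the mle, biased
s2 = x.var(axis=1, ddof=1)    # n/(n-1) rescaling: the unbiased sample variance

print(f"E(mle) approx {mle.mean():.4f}  (true theta = {theta}; bias ~ {-theta/n:.2f})")
print(f"E(S^2) approx {s2.mean():.4f}  (unbiased)")
```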
