Robert V. Hogg, Joseph W. McKean, Allen T. Craig

414 Sufficiency

X̄ = (X_1 + X_2 + ··· + X_9)/9 is N(θ, σ^2/9), X̄ is an unbiased estimator of θ. The
statistic X_1 is N(θ, σ^2), so X_1 is also an unbiased estimator of θ. Although the
variance σ^2/9 of X̄ is less than the variance σ^2 of X_1, we cannot say, with n = 9,
that X̄ is the minimum variance unbiased estimator (MVUE) of θ; that definition
requires that the comparison be made with every unbiased estimator of θ. To be
sure, it is quite impossible to tabulate all other unbiased estimators of this
parameter θ, so other methods must be developed for making the comparisons of the
variances. A beginning on this problem is made in this chapter.
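The variance comparison above can be checked by simulation. The sketch below uses illustrative values θ = 5 and σ = 2 (these numbers are not from the text) to verify that both X̄ and X_1 are unbiased, while X̄ has the smaller variance σ^2/9:

```python
import numpy as np

# Monte Carlo check: both the sample mean X-bar and the single
# observation X_1 are unbiased for theta, but Var(X-bar) = sigma^2/9.
# theta = 5.0 and sigma = 2.0 are illustrative choices, not from the text.
rng = np.random.default_rng(0)
theta, sigma, n, reps = 5.0, 2.0, 9, 200_000

samples = rng.normal(theta, sigma, size=(reps, n))
xbar = samples.mean(axis=1)   # X-bar = (X_1 + ... + X_9)/9
x1 = samples[:, 0]            # the single observation X_1

print(xbar.mean(), x1.mean())    # both near theta = 5.0
print(xbar.var(), sigma**2 / n)  # near sigma^2/9 = 4/9
print(x1.var(), sigma**2)        # near sigma^2 = 4
```

Of course, agreeing with the text, this only compares two particular unbiased estimators; it says nothing about every other unbiased estimator of θ.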
Let us now discuss the problem of point estimation of a parameter from a slightly
different standpoint. Let X_1, X_2, ..., X_n denote a random sample of size n from a
distribution that has the pdf f(x; θ), θ ∈ Ω. The distribution may be of either the
continuous or the discrete type. Let Y = u(X_1, X_2, ..., X_n) be a statistic on which
we wish to base a point estimate of the parameter θ. Let δ(y) be that function of
the observed value of the statistic Y which is the point estimate of θ. Thus the
function δ decides the value of our point estimate of θ, and δ is called a decision
function or a decision rule. One value of the decision function, say δ(y), is called
a decision. Thus a numerically determined point estimate of a parameter θ is a
decision. Now a decision may be correct or it may be wrong. It would be useful to
have a measure of the seriousness of the difference, if any, between the true value
of θ and the point estimate δ(y). Accordingly, with each pair [θ, δ(y)], θ ∈ Ω, we
associate a nonnegative number L[θ, δ(y)] that reflects this seriousness. We call the
function L the loss function. The expected (mean) value of the loss function is
called the risk function. If f_Y(y; θ), θ ∈ Ω, is the pdf of Y, the risk function
R(θ, δ) is given by
R(θ, δ) = E{L[θ, δ(Y)]} = ∫_{−∞}^{∞} L[θ, δ(y)] f_Y(y; θ) dy
if Y is a random variable of the continuous type. It would be desirable to select a
decision function that minimizes the risk R(θ, δ) for all values of θ, θ ∈ Ω. But this
is usually impossible because the decision function δ that minimizes R(θ, δ) for one
value of θ may not minimize R(θ, δ) for another value of θ. Accordingly, we need
either to restrict our decision function to a certain class or to consider methods of
ordering the risk functions. The following example, while very simple, dramatizes
these difficulties.
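For a continuous Y, the risk integral above can be approximated numerically. The sketch below assumes squared-error loss L[θ, δ(y)] = [θ − δ(y)]^2 and takes Y to be the mean of n = 25 observations from N(θ, 1), so that f_Y is the N(θ, 1/25) density; the value θ = 2.0 and the integration grid are illustrative choices, not from the text:

```python
import numpy as np

# Approximate R(theta, delta) = ∫ L[theta, delta(y)] f_Y(y; theta) dy
# for squared-error loss, with Y ~ N(theta, 1/25) (mean of n = 25 draws
# from N(theta, 1)). Simple rectangle-rule integration on a fine grid.
def risk(theta, delta, n=25):
    sd = 1.0 / np.sqrt(n)
    y = np.linspace(theta - 8*sd, theta + 8*sd, 4001)  # ±8 sd covers the mass
    dy = y[1] - y[0]
    f_Y = np.exp(-(y - theta)**2 / (2*sd**2)) / (sd * np.sqrt(2*np.pi))
    L = (theta - delta(y))**2
    return np.sum(L * f_Y) * dy

print(risk(2.0, lambda y: y))        # ≈ Var(Y) = 1/25 = 0.04
print(risk(2.0, lambda y: 0.0 * y))  # ≈ theta^2 = 4.0
```

The two decision rules plugged in here are the ones compared analytically in the example that follows.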
Example 7.1.2. Let X_1, X_2, ..., X_25 be a random sample from a distribution that
is N(θ, 1), for −∞ < θ < ∞. Let Y = X̄, the mean of the random sample, and
let L[θ, δ(y)] = [θ − δ(y)]^2. We shall compare the two decision functions given by
δ_1(y) = y and δ_2(y) = 0 for −∞ < y < ∞. The corresponding risk functions are

R(θ, δ_1) = E[(θ − Y)^2] = 1/25

and

R(θ, δ_2) = E[(θ − 0)^2] = θ^2.
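A minimal simulation of this example, at a few illustrative θ values (the grid of θ values is chosen here for illustration, not taken from the text), shows why neither rule dominates: R(θ, δ_1) stays near 1/25 for every θ, while R(θ, δ_2) = θ^2 is smaller only when |θ| < 1/5.

```python
import numpy as np

# Monte Carlo estimate of R(theta, delta_1) = E[(theta - Y)^2], where
# Y is the mean of n = 25 draws from N(theta, 1); compare with the
# exact R(theta, delta_2) = theta^2.
rng = np.random.default_rng(1)

def risk_delta1(theta, n=25, reps=100_000):
    y = rng.normal(theta, 1.0, size=(reps, n)).mean(axis=1)  # Y = X-bar
    return np.mean((theta - y)**2)

for theta in (0.0, 0.1, 0.5, 1.0):
    print(f"theta={theta}: R1 ≈ {risk_delta1(theta):.4f}, R2 = {theta**2:.4f}")
```

The estimated R1 column hovers around 0.04 = 1/25 at every θ, while R2 grows as θ^2, so each rule beats the other somewhere in Ω.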
