7.1. Measures of Quality of Estimators

If, in fact, θ = 0, then δ₂(y) = 0 is an excellent decision and we have R(0, δ₂) = 0.
However, if θ differs from zero by very much, it is equally clear that δ₂ = 0 is a poor
decision. For example, if, in fact, θ = 2, R(2, δ₂) = 4 > R(2, δ₁) = 1/25. In general,
we see that R(θ, δ₂) < R(θ, δ₁), provided that −1/5 < θ < 1/5, and that otherwise
R(θ, δ₂) ≥ R(θ, δ₁). That is, one of these decision functions is better than the other
for some values of θ, while the other decision function is better for other values of
θ. If, however, we had restricted our consideration to decision functions δ such that
E[δ(Y)] = θ for all values of θ, θ ∈ Ω, then the decision function δ₂(y) = 0 is not
allowed. Under this restriction and with the given L[θ, δ(y)], the risk function is the
variance of the unbiased estimator δ(Y), and we are confronted with the problem of
finding the MVUE. Later in this chapter we show that the solution is δ(y) = y = x̄.
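
The two risk functions compared here are easy to check by simulation. The sketch
below is ours, not the text's; it assumes, consistent with R(θ, δ₁) = 1/25, a random
sample of size n = 25 from a N(θ, 1) distribution, and it estimates R(θ, δ₁) =
E[(θ − X̄)²] by Monte Carlo alongside the exact values 1/25 and θ².

import numpy as np

rng = np.random.default_rng(0)
n, reps = 25, 200_000  # n matches the example; reps is an arbitrary simulation size

for theta in [0.0, 0.1, 0.2, 2.0]:
    # Each row is one sample of size n; X-bar is the sample mean, i.e., delta_1(Y).
    xbar = rng.normal(theta, 1.0, size=(reps, n)).mean(axis=1)
    mc_risk1 = np.mean((theta - xbar) ** 2)  # Monte Carlo estimate of R(theta, delta_1)
    print(f"theta={theta:4.1f}  R1~{mc_risk1:.4f} (exact {1/n:.4f})  R2={theta**2:.4f}")

For θ = 0.1 the printout favors δ₂ (0.01 < 0.04), while for θ = 2 it reproduces
R(2, δ₂) = 4 > R(2, δ₁) = 1/25, the comparison made above.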
Suppose, however, that we do not want to restrict ourselves to decision functions
δ, such that E[δ(Y)] = θ for all values of θ, θ ∈ Ω. Instead, let us say that
the decision function that minimizes the maximum of the risk function is the best
decision function. Because, in this example, R(θ, δ₂) = θ² is unbounded, δ₂(y) = 0
is not, in accordance with this criterion, a good decision function. On the other
hand, with −∞ < θ < ∞, we have


    max_θ R(θ, δ₁) = max_θ (1/25) = 1/25.

Accordingly, δ₁(y) = y = x̄ seems to be a very good decision in accordance with
this criterion because 1/25 is small. As a matter of fact, it can be proved that δ₁ is
the best decision function, as measured by the minimax criterion, when the loss
function is L[θ, δ(y)] = [θ − δ(y)]².
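
A partial argument for this claim, restricted to rules of the form δ_c(y) = cy for a
constant c (our illustration, not the book's proof), runs as follows. Since X̄ has
mean θ and variance 1/25,

    R(θ, δ_c) = E[(θ − cX̄)²] = (1 − c)²θ² + c²/25.

For every c ≠ 1 the term (1 − c)²θ² makes the maximum over θ infinite, while δ₁
(the case c = 1) has constant risk 1/25. Hence δ₁ is minimax at least within this
family.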


In this example we illustrated the following:


  1. Without some restriction on the decision function, it is difficult to find a
     decision function that has a risk function which is uniformly less than the risk
     function of another decision function.

  2. One principle of selecting a best decision function is called the minimax
     principle. This principle may be stated as follows: If the decision function
     given by δ₀(y) is such that, for all θ ∈ Ω,


    max_θ R[θ, δ₀(y)] ≤ max_θ R[θ, δ(y)]

for every other decision function δ(y), then δ₀(y) is called a minimax decision
function.
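
The minimax principle can also be applied mechanically. The following sketch is
ours: it truncates Ω to an arbitrary finite grid (necessary, since max_θ R(θ, δ₂)
is infinite on all of Ω), computes each rule's maximum risk using the closed form
R(θ, δ_c) = (1 − c)²θ² + c²/25 derived above, and selects the rule with the smallest
maximum.

import numpy as np

n = 25
thetas = np.linspace(-3.0, 3.0, 601)  # arbitrary truncation of Omega for illustration

def risk(c, theta):
    # Risk of delta_c(y) = c*y under squared-error loss, for N(theta, 1) and n = 25.
    return (1 - c) ** 2 * theta ** 2 + c ** 2 / n

max_risk = {c: risk(c, thetas).max() for c in (1.0, 0.9, 0.0)}
for c, m in max_risk.items():
    print(f"c = {c:3.1f}: max risk = {m:.4f}")
print("minimax c on this grid:", min(max_risk, key=max_risk.get))

On this grid the choice c = 1, that is δ₁, wins with maximum risk 1/25 = 0.04;
widening the grid only strengthens its lead, in agreement with the claim above.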

With the restriction E[δ(Y)] = θ and the loss function L[θ, δ(y)] = [θ − δ(y)]²,
the decision function that minimizes the risk function yields an unbiased estimator
with minimum variance. If, however, the restriction E[δ(Y)] = θ is replaced by some
other condition, the decision function δ(Y), if it exists, which minimizes E{[θ −
δ(Y)]²} uniformly in θ is sometimes called the minimum mean-squared-error
estimator. Exercises 7.1.6–7.1.8 provide examples of this type of estimator.
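
To see why the phrase "if it exists" is needed, consider again the family
δ_c(y) = cy (our illustration, in the spirit of those exercises, not their content).
Minimizing the risk

    R(θ, δ_c) = (1 − c)²θ² + c²/25

over c for a fixed θ gives c = θ²/(θ² + 1/25), which depends on θ. No single member
of the family minimizes the mean squared error uniformly in θ, so a uniform
minimizer exists only under further restrictions, such as unbiasedness.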
There are two additional observations about decision rules and loss functions
that should be made at this point. First, since Y is a statistic, the decision rule
