Robert V. Hogg, Joseph W. McKean, Allen T. Craig

Sufficiency

$\delta(Y)$ is also a statistic, and we could have started directly with a decision rule based on the observations in a random sample, say $\delta_1(X_1, X_2, \dots, X_n)$. The risk function is then given by

$$
R(\theta, \delta_1) = E\{L[\theta, \delta_1(X_1, \dots, X_n)]\}
= \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} L[\theta, \delta_1(x_1, \dots, x_n)]\, f(x_1; \theta) \cdots f(x_n; \theta)\, dx_1 \cdots dx_n
$$

if the random sample arises from a continuous-type distribution. We do not do this, because, as we show in this chapter, it is rather easy to find a good statistic, say $Y$, upon which to base all of the statistical inferences associated with a particular model. Thus we thought it more appropriate to start with a statistic that would be familiar, like the mle $Y = \overline{X}$ in Example 7.1.2. The second decision rule of that example could be written $\delta_2(X_1, X_2, \dots, X_n) = 0$, a constant no matter what values of $X_1, X_2, \dots, X_n$ are observed.
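A risk function such as $R(\theta, \delta_1)$ can also be approximated by Monte Carlo simulation rather than by evaluating the integral. The sketch below assumes, purely for illustration, a $N(\theta, 1)$ sample and squared-error loss, and compares the sample-mean rule with the constant rule $\delta_2 = 0$; the distribution and the parameter values here are assumptions chosen for the sketch, not details taken from Example 7.1.2.

```python
import random

def risk(delta, theta, n=10, reps=20000, seed=0):
    """Monte Carlo estimate of R(theta, delta) = E{L[theta, delta(X1,...,Xn)]}
    under squared-error loss, for an illustrative N(theta, 1) sample."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        xs = [rng.gauss(theta, 1.0) for _ in range(n)]
        total += (theta - delta(xs)) ** 2
    return total / reps

delta1 = lambda xs: sum(xs) / len(xs)  # the sample mean
delta2 = lambda xs: 0.0                # the constant decision rule

# For delta1 the risk is about 1/n for every theta; for delta2 it is theta^2,
# so the constant rule wins only when theta is near 0.
print(risk(delta1, theta=0.5))  # close to 1/10 = 0.1
print(risk(delta2, theta=0.5))  # exactly 0.5**2 = 0.25
```

This makes concrete why neither rule dominates: the comparison of the two risk functions depends on the unknown $\theta$.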
The second observation is that we have only used one loss function, namely the squared-error loss function $L(\theta, \delta) = (\theta - \delta)^2$. The absolute-error loss function $L(\theta, \delta) = |\theta - \delta|$ is another popular one. The loss function defined by

$$
L(\theta, \delta) = \begin{cases} 0, & |\theta - \delta| \le a, \\ b, & |\theta - \delta| > a, \end{cases}
$$

where $a$ and $b$ are positive constants, is sometimes referred to as the goalpost loss function. The reason for this terminology is that football fans recognize that it is similar to kicking a field goal: there is no loss (actually a three-point gain) if within $a$ units of the middle, but $b$ units of loss (zero points awarded) if outside that
restriction. In addition, loss functions can be asymmetric as well as symmetric, as the three previous ones have been. That is, for example, it might be more costly to underestimate the value of $\theta$ than to overestimate it. (Many of us think about this type of loss function when estimating the time it takes us to reach an airport to catch a plane.) Some of these loss functions are considered when studying Bayesian estimates in Chapter 11.
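The three loss functions just described are simple to encode. The following is a minimal sketch; the goalpost constants $a$ and $b$ below are arbitrary illustrative choices, not values from the text.

```python
def squared_error(theta, delta):
    return (theta - delta) ** 2

def absolute_error(theta, delta):
    return abs(theta - delta)

def goalpost(theta, delta, a=1.0, b=3.0):
    # No loss within a units of theta, flat loss b outside that band.
    return 0.0 if abs(theta - delta) <= a else b

theta = 5.0
for d in (4.5, 7.0):
    print(squared_error(theta, d), absolute_error(theta, d), goalpost(theta, d))
# d = 4.5: 0.25 0.5 0.0   (inside the goalposts)
# d = 7.0: 4.0  2.0 3.0   (outside the goalposts)
```

Note how the three penalize the same error differently: squared error grows fastest for large misses, while the goalpost loss is flat once the estimate leaves the band.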
Let us close this section with an interesting illustration that raises a question leading to the likelihood principle, which many statisticians believe is a quality characteristic that estimators should enjoy. Suppose that two statisticians, $A$ and $B$, observe 10 independent trials of a random experiment ending in success or failure. Let the probability of success on each trial be $\theta$, where $0 < \theta < 1$. Let us say that each statistician observes one success in these 10 trials. Suppose, however, that $A$ had decided to take $n = 10$ such observations in advance and found only one success, while $B$ had decided to take as many observations as needed to get the first success, which happened on the 10th trial. The model of $A$ is that $Y$ is $b(n = 10, \theta)$ and $y = 1$ is observed. On the other hand, $B$ is considering the random variable $Z$ that has a geometric pmf $g(z) = (1 - \theta)^{z-1}\theta$, $z = 1, 2, 3, \dots$, and $z = 10$ is observed.
In either case, an estimate of $\theta$ could be the relative frequency of success given by

$$
\frac{y}{n} = \frac{1}{z} = \frac{1}{10}.
$$
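The point of the illustration can be seen numerically: as functions of $\theta$, the binomial likelihood $\binom{10}{1}\theta(1-\theta)^9$ and the geometric likelihood $(1-\theta)^9\theta$ differ only by a constant factor, so both are maximized at the same value. A small sketch (the grid search is chosen only for simplicity):

```python
from math import comb

def binom_lik(theta, y=1, n=10):
    # Statistician A's model: Y is b(n = 10, theta), y = 1 observed.
    return comb(n, y) * theta**y * (1 - theta)**(n - y)

def geom_lik(theta, z=10):
    # Statistician B's model: Z has the geometric pmf, z = 10 observed.
    return (1 - theta)**(z - 1) * theta

# The two likelihoods differ only by the constant comb(10, 1) = 10,
# so they are maximized at the same theta, namely 1/10.
thetas = [i / 1000 for i in range(1, 1000)]
mle_A = max(thetas, key=binom_lik)
mle_B = max(thetas, key=geom_lik)
print(mle_A, mle_B)  # both 0.1
```

Since the two likelihoods are proportional, any estimator that depends on the data only through the likelihood function, such as the mle, must give both statisticians the same answer; this is the idea behind the likelihood principle.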