Robert V. Hogg, Joseph W. McKean, Allen T. Craig

11.1. Bayesian Procedures 659

an experimental value of any random variable, say $W$, if we want our prediction to
be “reasonably close” to the value to be observed? Many statisticians would predict
the mean, $E(W)$, of the distribution of $W$; others would predict a median (perhaps
unique) of the distribution of $W$; and some would have other predictions. However,
it seems desirable that the choice of the decision function should depend upon a loss
function $L[\theta, \delta(x)]$. One way in which this dependence upon the loss function can
be reflected is to select the decision function $\delta$ in such a way that the conditional
expectation of the loss is a minimum. A Bayes estimate is a decision function $\delta$
that minimizes
$$
E\{L[\Theta, \delta(x)] \mid X = x\} = \int_{-\infty}^{\infty} L[\theta, \delta(x)]\, k(\theta \mid x)\, d\theta
$$

if $\Theta$ is a random variable of the continuous type. That is,
$$
\delta(x) = \operatorname{Argmin} \int_{-\infty}^{\infty} L[\theta, \delta(x)]\, k(\theta \mid x)\, d\theta. \tag{11.1.10}
$$

The associated random variable $\delta(X)$ is called a Bayes estimator of $\theta$. The usual
modification of the right-hand member of this equation is made for random variables
of the discrete type. If the loss function is given by $L[\theta, \delta(x)] = [\theta - \delta(x)]^2$, then
the Bayes estimate is $\delta(x) = E(\Theta \mid x)$, the mean of the conditional distribution of $\Theta$,
given $X = x$. This follows from the fact that $E[(W - b)^2]$, if it exists, is a minimum
when $b = E(W)$. If the loss function is given by $L[\theta, \delta(x)] = |\theta - \delta(x)|$, then a
median of the conditional distribution of $\Theta$, given $X = x$, is the Bayes solution.
This follows from the fact that $E(|W - b|)$, if it exists, is a minimum when $b$ is equal
to any median of the distribution of $W$.
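These two facts can be checked numerically. The sketch below is a hypothetical illustration, not from the text: the Beta(3, 5) posterior, grid sizes, and variable names are assumptions. It discretizes a posterior density $k(\theta \mid x)$ and searches over candidate estimates $\delta(x)$ for the minimizer of the posterior expected loss under each loss function.

```python
import numpy as np
from math import gamma

# Hypothetical posterior k(theta | x): a Beta(3, 5) density on [0, 1],
# such as arises from a uniform prior and 2 successes in 6 Bernoulli trials.
a, b = 3.0, 5.0
theta = np.linspace(0.0, 1.0, 2001)
dens = theta**(a - 1) * (1 - theta)**(b - 1) * gamma(a + b) / (gamma(a) * gamma(b))
w = dens / dens.sum()                        # normalized posterior weights

# Candidate estimates delta(x); pick the one minimizing posterior expected loss.
cands = np.linspace(0.0, 1.0, 2001)
diff = theta[None, :] - cands[:, None]       # theta - delta, for every pair
d_sq = cands[np.argmin((diff**2) @ w)]       # squared-error loss
d_abs = cands[np.argmin(np.abs(diff) @ w)]   # absolute-error loss

post_mean = w @ theta                        # E(Theta | x) = a/(a+b) = 0.375
print(d_sq, post_mean)   # squared-error minimizer matches the posterior mean
print(d_abs)             # absolute-error minimizer matches a posterior median
```

As the text asserts, the squared-error minimizer agrees with the posterior mean $a/(a+b) = 0.375$, while the absolute-error minimizer agrees with the posterior median of the Beta(3, 5) distribution (approximately 0.364), which here differs from the mean.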
It is easy to generalize this to estimate a specified function of $\theta$, say $l(\theta)$. For
the loss function $L[l(\theta), \delta(x)]$, a Bayes estimate of $l(\theta)$ is a decision function $\delta$ that
minimizes
$$
E\{L[l(\Theta), \delta(x)] \mid X = x\} = \int_{-\infty}^{\infty} L[l(\theta), \delta(x)]\, k(\theta \mid x)\, d\theta.
$$
The random variable $\delta(X)$ is called a Bayes estimator of $l(\theta)$.
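In particular, under squared-error loss $L[l(\theta), \delta(x)] = [l(\theta) - \delta(x)]^2$, the same argument as before gives $\delta(x) = E[l(\Theta) \mid x]$. A minimal numerical sketch of this, where the Beta(3, 5) posterior and the choice $l(\theta) = \theta(1 - \theta)$ are assumptions made purely for illustration:

```python
import numpy as np

# Hypothetical posterior k(theta | x): Beta(3, 5), discretized on [0, 1].
a, b = 3.0, 5.0
theta = np.linspace(0.0, 1.0, 4001)
dens = theta**(a - 1) * (1 - theta)**(b - 1)
w = dens / dens.sum()                 # normalized posterior weights

# Under squared-error loss, the Bayes estimate of l(theta) = theta(1 - theta)
# is the posterior mean of l(Theta).
l = theta * (1 - theta)
delta = w @ l
# For a Beta(a, b) posterior: E[Theta(1 - Theta)] = a*b / ((a+b)(a+b+1)) = 15/72
print(delta)
```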
The conditional expectation of the loss, given $X = x$, defines a random variable
that is a function of the sample $X$. The expected value of that function of $X$, in
the notation of this section, is given by
$$
\int_{-\infty}^{\infty} \left\{ \int_{-\infty}^{\infty} L[\theta, \delta(x)]\, k(\theta \mid x)\, d\theta \right\} g_1(x)\, dx
= \int_{-\infty}^{\infty} \left\{ \int_{-\infty}^{\infty} L[\theta, \delta(x)]\, L(x \mid \theta)\, dx \right\} h(\theta)\, d\theta,
$$

in the continuous case. The integral within the braces in the latter expression is,
for every given $\theta \in \Theta$, the risk function $R(\theta, \delta)$; accordingly, the latter expression
is the mean value of the risk, or the expected risk. Because a Bayes estimate $\delta(x)$
minimizes
$$
\int_{-\infty}^{\infty} L[\theta, \delta(x)]\, k(\theta \mid x)\, d\theta
$$
for every $x$ for which $g_1(x) > 0$, it is evident that a Bayes estimate $\delta(x)$ minimizes
this mean value of the risk. We now give two illustrative examples.
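The claim that pointwise minimization of the posterior expected loss also minimizes the expected risk can be illustrated numerically. In the sketch below the binomial model, the uniform prior, and the competing estimator are assumptions chosen for illustration: under squared-error loss, the Bayes estimator $(x+1)/(n+2)$, the posterior mean under a uniform prior, attains a smaller expected risk than the maximum likelihood estimator $x/n$.

```python
import numpy as np
from math import comb

# Model: X ~ binomial(n, theta), with a uniform prior h(theta) = 1 on (0, 1).
n = 6
theta = np.linspace(0.0, 1.0, 2001)[1:-1]       # interior grid over the prior
h = np.ones_like(theta) / theta.size            # uniform prior weights
pmf = np.array([[comb(n, x) * t**x * (1 - t)**(n - x) for x in range(n + 1)]
                for t in theta])                # P(X = x | theta)

def expected_risk(delta):
    # R(theta, delta) = E{[theta - delta(X)]^2 | theta}, then average over the prior.
    d = np.array([delta(x) for x in range(n + 1)])
    risk = ((theta[:, None] - d[None, :])**2 * pmf).sum(axis=1)
    return float(risk @ h)

bayes_risk = expected_risk(lambda x: (x + 1) / (n + 2))  # posterior mean
mle_risk = expected_risk(lambda x: x / n)                # maximum likelihood
print(bayes_risk, mle_risk)   # the Bayes estimator has the smaller expected risk
```

Here the expected risks work out to $1/48 \approx 0.0208$ for the Bayes estimator versus $1/36 \approx 0.0278$ for the maximum likelihood estimator, consistent with the argument above.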
