Robert V. Hogg, Joseph W. McKean, Allen T. Craig

660 Bayesian Statistics

Example 11.1.2. Consider the model

$$X_i \mid \theta \sim \text{iid } b(1, \theta), \qquad \Theta \sim \text{beta}(\alpha, \beta), \quad \alpha \text{ and } \beta \text{ known};$$

that is, the prior pdf is

$$h(\theta) = \begin{cases} \dfrac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\, \theta^{\alpha-1} (1-\theta)^{\beta-1} & 0 < \theta < 1 \\[4pt] 0 & \text{elsewhere,} \end{cases}$$
where α and β are assigned positive constants. We seek a decision function δ that is a Bayes solution. The sufficient statistic is $Y = \sum_{i=1}^{n} X_i$, which has a $b(n, \theta)$ distribution. Thus the conditional pdf of Y given Θ = θ is

$$g(y \mid \theta) = \begin{cases} \dbinom{n}{y} \theta^{y} (1-\theta)^{n-y} & y = 0, 1, \dots, n \\[4pt] 0 & \text{elsewhere.} \end{cases}$$
Thus, by (11.1.9), the conditional pdf of Θ, given Y = y, at points of positive probability density, is

$$k(\theta \mid y) \propto \theta^{y}(1-\theta)^{n-y}\, \theta^{\alpha-1}(1-\theta)^{\beta-1}, \quad 0 < \theta < 1.$$

That is,

$$k(\theta \mid y) = \frac{\Gamma(n+\alpha+\beta)}{\Gamma(\alpha+y)\,\Gamma(n+\beta-y)}\, \theta^{\alpha+y-1}(1-\theta)^{\beta+n-y-1}, \quad 0 < \theta < 1,$$

and $y = 0, 1, \dots, n$. Hence the posterior pdf is a beta density function with parameters $(\alpha+y,\ \beta+n-y)$. We take the squared-error loss, i.e., $\mathcal{L}[\theta, \delta(y)] = [\theta - \delta(y)]^2$, as the loss function. Then the Bayes point estimate of θ is the mean of this beta pdf, which is

$$\delta(y) = \frac{\alpha+y}{\alpha+\beta+n}.$$
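The conjugate update above can be sketched in a few lines of Python; the prior and data values used here are hypothetical, chosen only to illustrate the formulas.

```python
from fractions import Fraction

def beta_binomial_posterior(alpha, beta, y, n):
    """Conjugate update: a beta(alpha, beta) prior combined with y
    successes in n Bernoulli trials gives a beta(alpha + y, beta + n - y)
    posterior."""
    return alpha + y, beta + n - y

def bayes_estimate(alpha, beta, y, n):
    """Posterior mean, i.e., the Bayes estimate under squared-error loss:
    delta(y) = (alpha + y) / (alpha + beta + n)."""
    a_post, b_post = beta_binomial_posterior(alpha, beta, y, n)
    return Fraction(a_post, a_post + b_post)

# Hypothetical example: beta(2, 3) prior, 7 successes in 10 trials.
# delta(y) = (2 + 7) / (2 + 3 + 10) = 9/15 = 3/5
print(bayes_estimate(2, 3, 7, 10))
```

Using `Fraction` keeps the arithmetic exact, so the result matches the closed-form expression term for term.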
It is very instructive to note that this Bayes estimator can be written as

$$\delta(y) = \left(\frac{n}{\alpha+\beta+n}\right)\frac{y}{n} + \left(\frac{\alpha+\beta}{\alpha+\beta+n}\right)\frac{\alpha}{\alpha+\beta},$$

which is a weighted average of the maximum likelihood estimate $y/n$ of θ and the mean $\alpha/(\alpha+\beta)$ of the prior pdf of the parameter. Moreover, the respective weights are $n/(\alpha+\beta+n)$ and $(\alpha+\beta)/(\alpha+\beta+n)$. Note that for large n the Bayes estimate is close to the maximum likelihood estimate of θ and that, furthermore, δ(Y) is a consistent estimator of θ. Thus we see that α and β should be selected so that not only is $\alpha/(\alpha+\beta)$ the desired prior mean, but the sum $\alpha+\beta$ indicates the worth of the prior opinion relative to a sample of size n. That is, if we want our prior opinion to have as much weight as a sample of size 20, we would take $\alpha+\beta = 20$. So if our prior mean is $\frac{3}{4}$, we select α and β so that $\alpha = 15$ and $\beta = 5$.
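The weighted-average form and the prior-elicitation recipe can be checked numerically. This sketch uses the text's choice α = 15, β = 5 (prior mean 3/4, prior weight of a sample of size 20); the data values y = 30, n = 40 are hypothetical.

```python
from fractions import Fraction

def bayes_as_weighted_average(alpha, beta, y, n):
    """delta(y) written as a convex combination of the MLE y/n and the
    prior mean alpha/(alpha + beta), with weights n/(alpha + beta + n)
    and (alpha + beta)/(alpha + beta + n)."""
    w_data = Fraction(n, alpha + beta + n)
    w_prior = Fraction(alpha + beta, alpha + beta + n)
    return w_data * Fraction(y, n) + w_prior * Fraction(alpha, alpha + beta)

# Prior from the text: mean 3/4, total weight alpha + beta = 20.
alpha, beta = 15, 5
# Hypothetical data: 30 successes in 40 trials.
y, n = 30, 40

est = bayes_as_weighted_average(alpha, beta, y, n)
direct = Fraction(alpha + y, alpha + beta + n)
assert est == direct  # the two algebraic forms of delta(y) agree
print(est)            # (15 + 30)/(15 + 5 + 40) = 45/60, which reduces to 3/4
```

As n grows with y/n held fixed, the data weight $n/(\alpha+\beta+n)$ tends to 1, which is the convergence to the MLE noted above.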
