Example 11.1.3. For this example, we have the normal model
$$X_i \mid \theta \sim \text{iid } N(\theta, \sigma^2), \quad \text{where } \sigma^2 \text{ is known,}$$
$$\Theta \sim N(\theta_0, \sigma_0^2), \quad \text{where } \theta_0 \text{ and } \sigma_0^2 \text{ are known.}$$
Then $Y = \bar{X}$ is a sufficient statistic. Hence an equivalent formulation of the model is
$$Y \mid \theta \sim N(\theta, \sigma^2/n), \quad \text{where } \sigma^2 \text{ is known,}$$
Θ ∼ N(θ 0 ,σ^20 ),whereθ 0 andσ^20 are known.Then for the posterior pdf, we havek(θ|y)∝1
√
2 πσ/√
n1
√
2 πσ 0exp[
−(y−θ)^2
2(σ^2 /n)−(θ−θ 0 )^2
2 σ 02]
.If we eliminate all constant factors (including factors involving onlyy), we havek(θ|y)∝exp[
−[σ 02 +(σ^2 /n)]θ^2 −2[yσ^20 +θ 0 (σ^2 /n)]θ
2(σ^2 /n)σ^20]
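The completing-the-square step carried out next is routine but easy to botch, so here is a minimal symbolic check (a sketch assuming the SymPy library; the symbol names are ours) that the quadratic in $\theta$ in the display above differs from the completed-square form by a term free of $\theta$, which the proportionality absorbs.

```python
import sympy as sp

# Hypothetical symbol names: s2n stands for sigma^2/n, s02 for sigma_0^2.
theta, y, th0 = sp.symbols('theta y theta_0')
s2n, s02 = sp.symbols('sigma2_n sigma0_2', positive=True)

a = s02 + s2n                       # coefficient of theta^2
b = y*s02 + th0*s2n                 # half the (negated) coefficient of theta

quadratic = a*theta**2 - 2*b*theta  # numerator of the exponent above
completed = a*(theta - b/a)**2      # completed-square form derived next

# Their difference should not involve theta at all.
diff = sp.expand(completed - quadratic)
assert sp.simplify(sp.diff(diff, theta)) == 0
print(sp.simplify(diff))            # the theta-free remainder b**2/a
```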
This can be simplified by completing the square to read (after eliminating factors not involving $\theta$)
$$k(\theta \mid y) \propto \exp\left[-\frac{\left(\theta - \dfrac{y\sigma_0^2 + \theta_0(\sigma^2/n)}{\sigma_0^2 + (\sigma^2/n)}\right)^2}{2(\sigma^2/n)\sigma_0^2/[\sigma_0^2 + (\sigma^2/n)]}\right].$$
That is, the posterior pdf of the parameter is obviously normal with mean
$$\frac{y\sigma_0^2 + \theta_0(\sigma^2/n)}{\sigma_0^2 + (\sigma^2/n)} = \left(\frac{\sigma_0^2}{\sigma_0^2 + (\sigma^2/n)}\right)y + \left(\frac{\sigma^2/n}{\sigma_0^2 + (\sigma^2/n)}\right)\theta_0 \qquad (11.1.11)$$
and variance $(\sigma^2/n)\sigma_0^2/[\sigma_0^2 + (\sigma^2/n)]$. If the squared-error loss function is used, this
posterior mean is the Bayes estimator. Again, note that it is a weighted average
of the maximum likelihood estimate $y = \bar{x}$ and the prior mean $\theta_0$. As in the
last example, for large $n$, the Bayes estimator is close to the maximum likelihood
estimator and $\delta(Y)$ is a consistent estimator of $\theta$. Thus the Bayesian procedures
permit the decision maker to enter his or her prior opinions into the solution in a
very formal way, with the influence of these prior notions diminishing as $n$ increases.
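To make the weighting in (11.1.11) concrete, the following sketch (plain Python; the numerical values are made up for illustration) computes the posterior mean and variance and shows the weight shifting from the prior mean $\theta_0$ to the sample mean as $n$ grows.

```python
def normal_posterior(ybar, n, sigma2, theta0, sigma0_2):
    """Posterior mean and variance for the normal model with known sigma^2
    and a N(theta0, sigma0_2) prior, following (11.1.11)."""
    s2n = sigma2 / n                          # variance of Y = Xbar given theta
    w = sigma0_2 / (sigma0_2 + s2n)           # weight on the sample mean ybar
    mean = w * ybar + (1 - w) * theta0        # weighted average of ybar and theta0
    var = s2n * sigma0_2 / (sigma0_2 + s2n)   # posterior variance
    return mean, var

# Illustrative (made-up) values: ybar = 4.0, sigma^2 = 9, prior N(0, 1).
for n in (1, 10, 100, 1000):
    mean, var = normal_posterior(4.0, n, 9.0, 0.0, 1.0)
    print(f"n={n:5d}  posterior mean={mean:.4f}  variance={var:.5f}")
# As n increases, the posterior mean approaches ybar (the MLE) and the
# posterior variance tends to 0, matching the consistency of the Bayes estimator.
```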
In Bayesian statistics, all the information is contained in the posterior pdf $k(\theta \mid y)$.
In Examples 11.1.2 and 11.1.3, we found Bayesian point estimates using the squared-error loss function. It should be noted that if $L[\delta(y), \theta] = |\delta(y) - \theta|$, the absolute
value of the error, then the Bayes solution would be the median of the posterior
distribution of the parameter, which is given by $k(\theta \mid y)$. Hence the Bayes estimator
changes, as it should, with different loss functions.
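To illustrate this last point numerically, the sketch below (assuming NumPy; the skewed Gamma posterior is an arbitrary stand-in, chosen because its mean and median differ, unlike the normal posterior of Example 11.1.3) estimates the expected loss by Monte Carlo and confirms that the posterior median does better than the posterior mean under absolute-error loss.

```python
import numpy as np

rng = np.random.default_rng(0)
# A skewed stand-in posterior (Gamma), so the mean and median differ.
theta_draws = rng.gamma(shape=2.0, scale=1.5, size=200_000)

post_mean = theta_draws.mean()
post_median = np.median(theta_draws)

def expected_abs_loss(delta):
    # Monte Carlo estimate of E[|delta - Theta| given y]
    return np.abs(delta - theta_draws).mean()

print(f"mean  ={post_mean:.4f}  E|loss|={expected_abs_loss(post_mean):.4f}")
print(f"median={post_median:.4f}  E|loss|={expected_abs_loss(post_median):.4f}")
# The median attains the smaller expected absolute error, as the theory asserts.
```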