658 Bayesian Statistics
provided that 0 < θ < ∞ and x_i = 0, 1, 2, ..., i = 1, 2, ..., n. However, c(x) must be that "constant" needed to make k(θ|x) a pdf, namely,

$$c(\mathbf{x}) = \frac{1}{\Gamma\!\left(\sum x_i + \alpha\right)\left[\beta/(n\beta+1)\right]^{\sum x_i + \alpha}}.$$
Accordingly, we frequently write that k(θ|x) is proportional to L(x|θ)h(θ); that is, the posterior pdf can be written as

$$k(\theta|\mathbf{x}) \propto L(\mathbf{x}|\theta)h(\theta). \tag{11.1.8}$$
Note that in the right-hand member of this expression, all factors involving constants and x alone (not θ) can be dropped. For illustration, in solving the problem presented in Example 11.1.1, we simply write
$$k(\theta|\mathbf{x}) \propto \theta^{\sum x_i} e^{-n\theta}\,\theta^{\alpha-1} e^{-\theta/\beta}$$

or, equivalently,

$$k(\theta|\mathbf{x}) \propto \theta^{\sum x_i + \alpha - 1} e^{-\theta/[\beta/(n\beta+1)]},$$

0 < θ < ∞, and is equal to zero elsewhere. Clearly, k(θ|x) must be a gamma pdf with parameters α* = ∑ x_i + α and β* = β/(nβ + 1).
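This conjugate update can be checked numerically. The sketch below, with an illustrative Poisson sample and gamma hyperparameters of our own choosing (not from the text), verifies that the likelihood times the prior is proportional to a gamma pdf with the updated parameters α* = ∑ x_i + α and β* = β/(nβ + 1): the ratio of the two is the same constant at every θ.

```python
import math

# Hypothetical Poisson sample and gamma prior hyperparameters (illustrative only)
x = [2, 0, 3, 1, 4]
alpha, beta = 2.0, 1.5
n, s = len(x), sum(x)

# Conjugate update from the text: alpha* = sum(x_i) + alpha, beta* = beta/(n*beta + 1)
alpha_star = s + alpha
beta_star = beta / (n * beta + 1)

def likelihood_times_prior(theta):
    # theta^{sum x_i} e^{-n theta} * theta^{alpha-1} e^{-theta/beta},
    # with all factors not involving theta dropped
    return theta ** s * math.exp(-n * theta) * theta ** (alpha - 1) * math.exp(-theta / beta)

def gamma_pdf(theta, a, b):
    # gamma pdf with shape a and scale b
    return theta ** (a - 1) * math.exp(-theta / b) / (math.gamma(a) * b ** a)

# If k(theta|x) is gamma(alpha*, beta*), this ratio is constant in theta
ratios = [likelihood_times_prior(t) / gamma_pdf(t, alpha_star, beta_star)
          for t in (0.5, 1.0, 2.0, 3.0)]
```

The common value of the ratios is precisely the normalizing constant that the proportionality sign lets us ignore.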
There is another observation that can be made at this point. Suppose that there exists a sufficient statistic Y = u(X) for the parameter, so that

$$L(\mathbf{x}|\theta) = g[u(\mathbf{x})|\theta]H(\mathbf{x}),$$

where now g(y|θ) is the pdf of Y, given Θ = θ. Then we note that
$$k(\theta|\mathbf{x}) \propto g[u(\mathbf{x})|\theta]h(\theta)$$

because the factor H(x), which does not depend upon θ, can be dropped. Thus, if a sufficient statistic Y for the parameter exists, we can begin with the pdf of Y if we wish and write

$$k(\theta|y) \propto g(y|\theta)h(\theta), \tag{11.1.9}$$
where now k(θ|y) is the conditional pdf of Θ, given the sufficient statistic Y = y. In the case of a sufficient statistic Y, we also use g_1(y) to denote the marginal pdf of Y; that is, in the continuous case,

$$g_1(y) = \int_{-\infty}^{\infty} g(y|\theta)h(\theta)\,d\theta.$$
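For a concrete instance of this marginal, consider the Poisson example again, where the sufficient statistic Y = ∑ X_i is Poisson(nθ) given Θ = θ. The sketch below (hyperparameters and observed y are illustrative assumptions, not from the text) approximates g_1(y) by a crude Riemann sum over θ and compares it with the value obtained by evaluating the gamma integral in closed form, which yields a negative binomial marginal.

```python
import math

# Illustrative setup: Y = sum X_i is Poisson(n*theta) given theta,
# and theta has a gamma(alpha, beta) prior (hypothetical values)
alpha, beta, n = 2.0, 1.5, 5
y = 10  # observed value of the sufficient statistic

def g(y, theta):
    # pmf of Y given Theta = theta: Poisson with mean n*theta
    lam = n * theta
    return lam ** y * math.exp(-lam) / math.factorial(y)

def h(theta):
    # gamma(alpha, beta) prior pdf (shape alpha, scale beta)
    return theta ** (alpha - 1) * math.exp(-theta / beta) / (math.gamma(alpha) * beta ** alpha)

# g1(y) = integral over theta of g(y|theta) h(theta), by a crude Riemann sum
dt = 0.001
g1 = sum(g(y, i * dt) * h(i * dt) for i in range(1, 30000)) * dt

# Closed form from the gamma integral:
# g1(y) = Gamma(y+alpha) (n*beta)^y / [y! Gamma(alpha) (n*beta+1)^{y+alpha}]
closed = (math.gamma(y + alpha) * (n * beta) ** y
          / (math.factorial(y) * math.gamma(alpha) * (n * beta + 1) ** (y + alpha)))
```

Dividing g(y|θ)h(θ) by g_1(y) then recovers exactly the gamma posterior found earlier, which is the content of (11.1.9).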
11.1.2 Bayesian Point Estimation
Suppose we want a point estimator of θ. From the Bayesian viewpoint, this really amounts to selecting a decision function δ, so that δ(x) is a predicted value of θ (an experimental value of the random variable Θ) when both the computed value x and the conditional pdf k(θ|x) are known. Now, in general, how would we predict