Robert V. Hogg, Joseph W. McKean, Allen T. Craig

Bayesian Statistics

11.1.1 Prior and Posterior Distributions

We now describe the Bayesian approach to the problem of estimation. This approach
takes into account any prior knowledge of the experiment that the statistician has
and it is one application of a principle of statistical inference that may be called
Bayesian statistics. Consider a random variable X that has a distribution of
probability that depends upon the symbol θ, where θ is an element of a well-defined
set Ω. For example, if the symbol θ is the mean of a normal distribution, Ω may
be the real line. We have previously looked upon θ as being a parameter, albeit
an unknown parameter. Let us now introduce a random variable Θ that has a
distribution of probability over the set Ω; and just as we look upon x as a possible
value of the random variable X, we now look upon θ as a possible value of the
random variable Θ. Thus, the distribution of X depends upon θ, an experimental
value of the random variable Θ. We denote the pdf of Θ by h(θ) and we take
h(θ) = 0 when θ is not an element of Ω. The pdf h(θ) is called the prior pdf of Θ.
Moreover, we now denote the pdf of X by f(x | θ) since we think of it as a conditional
pdf of X, given Θ = θ. For clarity in this chapter, we use the following summary
of this model:


X|θ ∼ f(x|θ)
Θ ∼ h(θ). (11.1.1)
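The hierarchy in (11.1.1) can be simulated directly: first draw θ from the prior, then draw X from the conditional distribution given that θ. The short Python sketch below (ours, not the text's) uses a normal prior and a normal conditional distribution as illustrative choices:

```python
import random

# Illustrative instance of model (11.1.1) -- our choice, not the text's:
#   Theta ~ N(mu0, tau^2)                  (prior pdf h(theta))
#   X | Theta = theta ~ N(theta, sigma^2)  (conditional pdf f(x|theta))
mu0, tau, sigma = 0.0, 2.0, 1.0

random.seed(1)
draws = []
for _ in range(10_000):
    theta = random.gauss(mu0, tau)  # experimental value of Theta
    x = random.gauss(theta, sigma)  # draw X given Theta = theta
    draws.append(x)

# For this model, X is marginally N(mu0, sigma^2 + tau^2), so the
# sample variance of the draws should be close to 1 + 4 = 5.
mean_x = sum(draws) / len(draws)
var_x = sum((d - mean_x) ** 2 for d in draws) / len(draws)
print(round(mean_x, 3), round(var_x, 3))
```

Averaging over many draws of Θ in this way mirrors the integration over θ carried out analytically in (11.1.4): the simulated values of X follow the marginal distribution of X.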

Suppose that X1, X2, ..., Xn is a random sample from the conditional distribution
of X given Θ = θ with pdf f(x | θ). Vector notation is convenient in this
chapter. Let X′ = (X1, X2, ..., Xn) and x′ = (x1, x2, ..., xn). Thus we can write
the joint conditional pdf of X, given Θ = θ, as


L(x | θ) = f(x1 | θ) f(x2 | θ) · · · f(xn | θ). (11.1.2)

Thus the joint pdf of X and Θ is


g(x, θ) = L(x | θ) h(θ). (11.1.3)

If Θ is a random variable of the continuous type, the joint marginal pdf of X is
given by


g1(x) = ∫_{−∞}^{∞} g(x, θ) dθ. (11.1.4)
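The integral in (11.1.4) can also be checked numerically. In the normal–normal model sketched below (an illustrative choice, not the text's), the marginal distribution of a single observation X is known in closed form, N(0, σ² + τ²), so a simple quadrature of (11.1.4) can be verified against it:

```python
import math

# Illustrative model: Theta ~ N(0, tau^2), X | theta ~ N(theta, sigma^2)
tau, sigma = 2.0, 1.0

def norm_pdf(z, m, s):
    return math.exp(-0.5 * ((z - m) / s) ** 2) / (s * math.sqrt(2.0 * math.pi))

def g1(x, lo=-30.0, hi=30.0, steps=4000):
    """Approximate (11.1.4) by the trapezoidal rule on a wide grid."""
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps + 1):
        theta = lo + i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * norm_pdf(x, theta, sigma) * norm_pdf(theta, 0.0, tau)
    return total * h

# For this model the marginal of X is N(0, sigma^2 + tau^2) exactly.
x = 1.3
approx = g1(x)
exact = norm_pdf(x, 0.0, math.sqrt(sigma**2 + tau**2))
print(round(approx, 6), round(exact, 6))
```

The two printed values agree to many decimal places, since the Gaussian integrand decays rapidly and the trapezoidal rule is highly accurate on such integrands.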

If Θ is a random variable of the discrete type, integration would be replaced by
summation. In either case, the conditional pdf of Θ, given the sample X, is


k(θ | x) = g(x, θ) / g1(x) = L(x | θ) h(θ) / g1(x). (11.1.5)


The distribution defined by this conditional pdf is called the posterior distribution
and (11.1.5) is called the posterior pdf. The prior distribution reflects the
subjective belief about Θ before the sample is drawn, while the posterior distribution is
the conditional distribution of Θ after the sample is drawn. Further discussion on
these distributions follows an illustrative example.
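As a worked instance of (11.1.5), consider Bernoulli trials with a Beta(a, b) prior on θ (our illustrative choice, not the text's example). The posterior computed directly from the definition can be checked against the well-known conjugate closed form, Beta(a + Σxi, b + n − Σxi):

```python
import math

# Illustrative conjugate example (not from the text): Bernoulli trials
# with a Beta(a, b) prior on theta.
a, b = 2.0, 3.0
x = [1, 0, 1, 1, 0, 1]          # observed sample
n, s = len(x), sum(x)

def beta_pdf(t, a, b):
    c = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    return c * t ** (a - 1.0) * (1.0 - t) ** (b - 1.0)

def likelihood(t):
    # L(x|theta) from (11.1.2) for Bernoulli trials
    return t ** s * (1.0 - t) ** (n - s)

# Marginal g1(x) from (11.1.4); here the integral runs over (0, 1).
steps = 10_000
h = 1.0 / steps
g1 = h * sum((0.5 if i in (0, steps) else 1.0)
             * likelihood(i * h) * beta_pdf(i * h, a, b)
             for i in range(steps + 1))

def posterior(t):
    return likelihood(t) * beta_pdf(t, a, b) / g1   # equation (11.1.5)

# Conjugacy says the posterior is Beta(a + s, b + n - s).
t = 0.55
print(round(posterior(t), 5), round(beta_pdf(t, a + s, b + n - s), 5))
```

Note that the data enter (11.1.5) only through the likelihood, and g1(x) acts purely as a normalizing constant; this is why conjugate families, where the normalization is available in closed form, are convenient in Bayesian work.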
