Robert V. Hogg, Joseph W. McKean, Allen T. Craig

674 Bayesian Statistics

Theorem 11.3.1. Suppose we generate random variables by the following algorithm:

  1. Generate $Y \sim f_Y(y)$.

  2. Generate $X \sim f_{X|Y}(x|Y)$.

Then $X$ has pdf $f_X(x)$.


Proof: To avoid confusion, let $T$ be the random variable generated by the algorithm. We need to show that $T$ has pdf $f_X(x)$. Probabilities of events concerning $T$ are conditional on $Y$ and are taken with respect to the cdf $F_{X|Y}$. Recall that probabilities can always be written as expectations of indicator functions and, hence, for events concerning $T$, are conditional expectations. In particular, for any $t \in \mathbb{R}$,


$$
\begin{aligned}
P[T \le t] &= E\left[F_{X|Y}(t \,|\, Y)\right] \\
&= \int_{-\infty}^{\infty} \left[ \int_{-\infty}^{t} f_{X|Y}(x|y)\,dx \right] f_Y(y)\,dy \\
&= \int_{-\infty}^{t} \left[ \int_{-\infty}^{\infty} f_{X|Y}(x|y)\, f_Y(y)\,dy \right] dx \\
&= \int_{-\infty}^{t} \left[ \int_{-\infty}^{\infty} f_{X,Y}(x,y)\,dy \right] dx \\
&= \int_{-\infty}^{t} f_X(x)\,dx.
\end{aligned}
$$

Hence the random variable generated by the algorithm has pdf $f_X(x)$, as was to be shown.
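The two-step algorithm of Theorem 11.3.1 is easy to check by simulation. The following minimal sketch uses a hypothetical choice of distributions, not one from the text: $Y \sim N(0,1)$ and $X \mid Y \sim N(Y,1)$, for which the marginal of $X$ is $N(0,2)$, so the generated sample should have mean near 0 and variance near 2.

```python
import random
import statistics

random.seed(0)

def gen_x():
    # Step 1: generate Y from f_Y (assumed here: Y ~ N(0, 1)).
    y = random.gauss(0, 1)
    # Step 2: generate X from the conditional pdf f_{X|Y} (assumed: X | Y ~ N(Y, 1)).
    return random.gauss(y, 1)

# By Theorem 11.3.1, these draws have the marginal pdf f_X; here X ~ N(0, 2).
sample = [gen_x() for _ in range(100_000)]
print(statistics.mean(sample), statistics.variance(sample))
```

For these choices the marginal is exact (a normal mixed over a normal mean is normal with the variances added), so the simulation serves only as a sanity check of the algorithm.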


In the situation of this theorem, suppose we want to determine $E[W(X)]$ for some function $W(x)$, where $E[W^2(X)] < \infty$. Using the algorithm of the theorem, generate independently the sequence $(Y_1, X_1), (Y_2, X_2), \ldots, (Y_m, X_m)$, for a specified value of $m$, where $Y_i$ is drawn from the pdf $f_Y(y)$ and $X_i$ is generated from the pdf $f_{X|Y}(x|Y_i)$. Then by the Weak Law of Large Numbers,


$$
\overline{W} = \frac{1}{m} \sum_{i=1}^{m} W(X_i) \xrightarrow{P} \int_{-\infty}^{\infty} W(x) f_X(x)\,dx = E[W(X)].
$$

Furthermore, by the Central Limit Theorem, $\sqrt{m}\,(\overline{W} - E[W(X)])$ converges in distribution to a $N(0, \sigma_W^2)$ distribution, where $\sigma_W^2 = \operatorname{Var}(W(X))$. If $w_1, w_2, \ldots, w_m$ is a realization of such a random sample, then an approximate $(1-\alpha)100\%$ (large sample) confidence interval for $E[W(X)]$ is


$$
\overline{w} \pm z_{\alpha/2} \frac{s_W}{\sqrt{m}}, \qquad (11.3.4)
$$

where $s_W^2 = (m-1)^{-1} \sum_{i=1}^{m} (w_i - \overline{w})^2$.
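The Monte Carlo estimate and the confidence interval (11.3.4) can be sketched as follows. The distributions are again hypothetical choices for illustration: with $Y_i \sim N(0,1)$ and $X_i \mid Y_i \sim N(Y_i, 1)$, taking $W(x) = x^2$ gives $E[W(X)] = \operatorname{Var}(X) = 2$ as the target.

```python
import math
import random
import statistics

random.seed(1)

m = 10_000
# Hierarchical draws (assumed model): Y_i ~ N(0,1), X_i | Y_i ~ N(Y_i, 1).
# With W(x) = x^2, the target is E[W(X)] = Var(X) = 1 + 1 = 2.
w = [random.gauss(random.gauss(0, 1), 1) ** 2 for _ in range(m)]

wbar = statistics.mean(w)
s_w = statistics.stdev(w)       # sample sd with the (m-1)^{-1} divisor of (11.3.4)
z = 1.96                        # z_{alpha/2} for a 95% interval
half = z * s_w / math.sqrt(m)   # half-width z_{alpha/2} * s_W / sqrt(m)
print(f"95% CI for E[W(X)]: ({wbar - half:.3f}, {wbar + half:.3f})")
```

The interval shrinks at the rate $1/\sqrt{m}$, so quadrupling the number of simulations halves its width.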
To fix ideas, we present the following simple example.
