Advanced High-School Mathematics


320 CHAPTER 6 Inferential Statistics


$$
P(Y = y) \;=\; \sum_{i=1}^{\infty} P(Y = y \mid X = x_i)\,P(X = x_i). \tag{6.2}
$$
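Equation (6.2) (the law of total probability) can be checked numerically on a small joint distribution. The joint pmf below is an illustrative example, not one from the text:

```python
# Hypothetical joint pmf P(X = x and Y = y) on a small grid (illustrative values).
joint = {
    (0, 0): 0.10, (0, 1): 0.20, (0, 2): 0.10,
    (1, 0): 0.15, (1, 1): 0.25, (1, 2): 0.20,
}
xs = {x for x, _ in joint}
ys = {y for _, y in joint}

# Marginal distributions of X and Y.
pX = {x: sum(joint[(x, y)] for y in ys) for x in xs}
pY = {y: sum(joint[(x, y)] for x in xs) for y in ys}

# Right-hand side of (6.2): sum over i of P(Y = y | X = x_i) P(X = x_i).
def total_prob(y):
    return sum((joint[(x, y)] / pX[x]) * pX[x] for x in xs)

# The right-hand side reproduces the marginal P(Y = y) for every y.
for y in ys:
    assert abs(total_prob(y) - pY[y]) < 1e-12
```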

Having noted this, we now proceed:


$$
\begin{aligned}
\mu_{X+Y} &= \sum_{i=1}^{\infty}\sum_{j=1}^{\infty}(x_i + y_j)\,P(X = x_i \text{ and } Y = y_j)\\[1ex]
&= \sum_{i=1}^{\infty}\sum_{j=1}^{\infty} x_i\,P(X = x_i \text{ and } Y = y_j)
 \;+\; \sum_{i=1}^{\infty}\sum_{j=1}^{\infty} y_j\,P(X = x_i \text{ and } Y = y_j)\\[1ex]
&= \sum_{i=1}^{\infty}\sum_{j=1}^{\infty} x_i\,P(X = x_i \mid Y = y_j)\,P(Y = y_j)
 \;+\; \sum_{i=1}^{\infty}\sum_{j=1}^{\infty} y_j\,P(Y = y_j \mid X = x_i)\,P(X = x_i)\\[1ex]
&= \sum_{i=1}^{\infty} x_i \sum_{j=1}^{\infty} P(X = x_i \mid Y = y_j)\,P(Y = y_j)
 \;+\; \sum_{j=1}^{\infty} y_j \sum_{i=1}^{\infty} P(Y = y_j \mid X = x_i)\,P(X = x_i)\\[1ex]
&= \sum_{i=1}^{\infty} x_i\,P(X = x_i) \;+\; \sum_{j=1}^{\infty} y_j\,P(Y = y_j)
 \qquad \text{by (6.1) and (6.2)}\\[1ex]
&= \mu_X + \mu_Y,
\end{aligned}
$$

proving that

$$
E(X + Y) \;=\; E(X) + E(Y). \tag{6.3}
$$
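Note that the derivation never assumed independence, so (6.3) holds even for dependent random variables. A quick numerical check on a hypothetical joint pmf for a dependent pair (the values below are illustrative, not from the text):

```python
# Hypothetical joint pmf for a dependent pair (X, Y); illustrative values.
joint = {
    (1, 1): 0.30, (1, 2): 0.10,
    (2, 1): 0.15, (2, 2): 0.45,
}

# E(X + Y) computed directly from the joint distribution.
E_sum = sum((x + y) * p for (x, y), p in joint.items())

# E(X) and E(Y) computed from the same joint distribution.
E_X = sum(x * p for (x, y), p in joint.items())
E_Y = sum(y * p for (x, y), p in joint.items())

# Equation (6.3): expectation is additive, independence not required.
assert abs(E_sum - (E_X + E_Y)) < 1e-12
```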

Next, note that if $X$ is a random variable and if $a$ and $b$ are constants,
then it's clear that $E(aX) = aE(X)$; from this we immediately infer
(since $b$ can itself be regarded as a random variable with mean $b$) that

$$
E(aX + b) \;=\; aE(X) + b.
$$
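This linearity property is likewise easy to verify numerically; the pmf for $X$ and the constants $a$, $b$ below are arbitrary illustrative choices:

```python
# Hypothetical pmf for X; a and b are arbitrary constants (illustrative).
pmf = {0: 0.2, 1: 0.5, 2: 0.3}
a, b = 3.0, -1.0

# E(X) directly from the pmf.
E_X = sum(x * p for x, p in pmf.items())

# E(aX + b) computed directly, by transforming each value of X.
E_aXb = sum((a * x + b) * p for x, p in pmf.items())

# Linearity of expectation: E(aX + b) = a E(X) + b.
assert abs(E_aXb - (a * E_X + b)) < 1e-12
```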