This proves the assertion made on page 365. Next, we have


$$
\begin{aligned}
E(X^2) &= \int_0^{\infty} x\,f_{X^2}(x)\,dx \\
&= \frac{1}{2}\int_0^{\infty} \sqrt{x}\,f_X(\sqrt{x})\,dx + \frac{1}{2}\int_0^{\infty} \sqrt{x}\,f_X(-\sqrt{x})\,dx \\
&= \int_0^{\infty} u^2 f_X(u)\,du + \int_{-\infty}^{0} u^2 f_X(u)\,du \\
&= \int_{-\infty}^{\infty} u^2 f_X(u)\,du \;=\; \int_{-\infty}^{\infty} x^2 f_X(x)\,dx,
\end{aligned}
$$

where in the third line we substituted u = \sqrt{x} in the first integral and u = -\sqrt{x} in the second.
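As a quick check of this identity (a worked example of ours, not from the text), let X be uniformly distributed on (0,1), so that f_X(x) = 1 for 0 < x < 1 and f_X(x) = 0 otherwise. The density of X^2 is then f_{X^2}(x) = \frac{1}{2\sqrt{x}} for 0 < x < 1, and both routes to E(X^2) give the same value:

$$
E(X^2) = \int_0^1 x\cdot\frac{1}{2\sqrt{x}}\,dx = \frac{1}{2}\int_0^1 \sqrt{x}\,dx = \frac{1}{3} = \int_0^1 x^2\,dx = \int_{-\infty}^{\infty} x^2 f_X(x)\,dx.
$$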
Finally,

$$
\begin{aligned}
\mathrm{Var}(X) &= \int_{-\infty}^{\infty} (x-\mu)^2 f_X(x)\,dx \\
&= \int_{-\infty}^{\infty} (x^2 - 2x\mu + \mu^2)\,f_X(x)\,dx \\
&= \int_{-\infty}^{\infty} x^2 f_X(x)\,dx \;-\; 2\mu\int_{-\infty}^{\infty} x f_X(x)\,dx \;+\; \mu^2\int_{-\infty}^{\infty} f_X(x)\,dx \\
&= E(X^2) - 2\mu\cdot\mu + \mu^2\cdot 1 \;=\; E(X^2) - \mu^2,
\end{aligned}
$$

exactly as for discrete random variables (page 321).
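Continuing the illustrative uniform example above (again ours, not from the text): for X uniform on (0,1) we have \mu = \int_0^1 x\,dx = \frac{1}{2} and E(X^2) = \frac{1}{3}, so

$$
\mathrm{Var}(X) = E(X^2) - \mu^2 = \frac{1}{3} - \frac{1}{4} = \frac{1}{12}.
$$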
We need one final theoretical result concerning variance. Assume
that we take two independent measurements X and Y from a given
population, both having mean μ. What is the variance of X + Y? This
will require the two results sketched on page 366, namely

(i) that E(X + Y) = E(X) + E(Y), whether or not X and Y are independent, and

(ii) that if X and Y are independent, then E(XY) = E(X)E(Y).

Using these two facts, we can proceed as follows:
$$
\begin{aligned}
\mathrm{Var}(X+Y) &= E\big((X+Y-2\mu)^2\big) \\
&= E(X^2 + Y^2 + 2XY - 4\mu X - 4\mu Y + 4\mu^2) \\
&= E(X^2) + E(Y^2) + 2\mu^2 - 4\mu^2 - 4\mu^2 + 4\mu^2 \\
&= E(X^2) - \mu^2 + E(Y^2) - \mu^2 \;=\; \mathrm{Var}(X) + \mathrm{Var}(Y).
\end{aligned}
$$
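For instance (our running illustration): if X and Y are independent measurements drawn uniformly from (0,1), each has variance \frac{1}{12} as computed above, so

$$
\mathrm{Var}(X+Y) = \frac{1}{12} + \frac{1}{12} = \frac{1}{6}.
$$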
