Advanced High-School Mathematics


SECTION 6.3 Parameters and Statistics 367


E(X_1 + X_2 + ··· + X_k) = E(X_1) + E(X_2) + ··· + E(X_k).
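Linearity of expectation can be checked by exact enumeration. The following is a minimal sketch (the fair-dice population is a hypothetical choice, not from the text): for k independent fair dice, the expected value of their sum equals k times the expected value of a single die.

```python
import itertools

def expectation_of_sum(k, faces=(1, 2, 3, 4, 5, 6)):
    """Exact E(X_1 + ... + X_k) for k independent fair dice,
    computed by enumerating all equally likely outcomes."""
    total = 0.0
    count = 0
    for outcome in itertools.product(faces, repeat=k):
        total += sum(outcome)
        count += 1
    return total / count

single = sum((1, 2, 3, 4, 5, 6)) / 6      # E(X_i) = 3.5 for one die
# E(X_1 + X_2 + X_3) = 3 * E(X_1), as linearity predicts.
assert abs(expectation_of_sum(3) - 3 * single) < 1e-9
```

Note that nothing in this check uses independence; linearity of expectation holds for any random variables, which the enumeration merely illustrates for a convenient case.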

If the random variables X and Y are independent, then we may write
the density function f_XY(x, y) as a product: f_XY(x, y) = f_X(x) f_Y(y),
from which it follows immediately that


E(XY) = E(X)E(Y), where X and Y are independent.
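For discrete random variables this product rule can be verified exactly. A small sketch, using two hypothetical distributions chosen only for illustration: under independence the joint probability of the pair (x, y) is the product p_X(x)·p_Y(y), and summing x·y against that joint distribution gives E(X)E(Y).

```python
# Hypothetical discrete distributions for X and Y (value -> probability).
px = {0: 0.2, 1: 0.5, 2: 0.3}
py = {1: 0.6, 4: 0.4}

ex = sum(x * p for x, p in px.items())    # E(X)
ey = sum(y * p for y, p in py.items())    # E(Y)

# Independence: P(X = x and Y = y) = p_X(x) * p_Y(y).
exy = sum(x * y * px[x] * py[y] for x in px for y in py)

assert abs(exy - ex * ey) < 1e-12
```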

In particular, this shows the following very important result. Assume
that we are to take n independent samples from a given population
having mean μ. If X̄ denotes the average of these samples, then X̄ is
itself a random variable and


X̄ = (X_1 + X_2 + ··· + X_n)/n,

where X_1, X_2, ..., X_n are independent random variables from this
population. We have, therefore, that


E(X̄) = (E(X_1) + E(X_2) + ··· + E(X_n))/n = μ.
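The fact that E(X̄) = μ can also be seen in simulation. Below is a hedged sketch, not from the text: the exponential population, the sample size, and the number of trials are all assumed choices. Averaging many simulated values of X̄ should land close to the population mean μ.

```python
import random

random.seed(0)            # fixed seed so the run is reproducible
mu = 2.0                  # population mean (Exponential with rate 1/mu)
n = 10                    # sample size for each X-bar
trials = 20000            # number of simulated sample means

xbar_values = [
    sum(random.expovariate(1 / mu) for _ in range(n)) / n
    for _ in range(trials)
]
estimate = sum(xbar_values) / trials

# E(X-bar) = mu, up to Monte Carlo noise.
assert abs(estimate - mu) < 0.05
```

The tolerance reflects simulation noise: the standard deviation of the estimate is roughly (μ/√n)/√trials, far below 0.05 here.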

We now turn our attention to variance. However, a couple of pre-
liminary observations are in order. First of all, let X be a continuous
random variable, let a be a real constant, and set Y = X + a. We
wish first to compare the density functions f_Y and f_X. Perhaps it's
already obvious that f_Y(x) = f_X(x − a), but a formal proof might be
instructive. We have


∫_{−∞}^{t} f_Y(x) dx = P(Y ≤ t) = P(X + a ≤ t) = P(X ≤ t − a) = ∫_{−∞}^{t−a} f_X(x) dx.
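This identity can be checked numerically for a concrete density. A sketch under assumed choices (X standard normal, a = 1.5, t = 2.0, none of which come from the text): integrating the shifted density f_Y(x) = f_X(x − a) up to t should reproduce P(X ≤ t − a).

```python
import math

def f_X(x):
    """Density of a standard normal random variable."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def phi(t):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1 + math.erf(t / math.sqrt(2)))

a, t = 1.5, 2.0

# Trapezoid-rule integral of f_Y(x) = f_X(x - a) over [-8, t];
# the tail below -8 contributes a negligible amount.
N = 200_000
lo = -8.0
h = (t - lo) / N
integral = h * (
    sum(f_X(lo + i * h - a) for i in range(N + 1))
    - 0.5 * (f_X(lo - a) + f_X(t - a))
)

# P(Y <= t) computed from the shifted density matches P(X <= t - a).
assert abs(integral - phi(t - a)) < 1e-6
```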

But a simple change of variable shows that
