Introduction to Probability and Statistics for Engineers and Scientists


118 Chapter 4: Random Variables and Expectation


where the last equality follows since each of the 10 coupons will (independently) not be of type i with probability 19/20. Hence,

E[X] = E[X_1] + ··· + E[X_20] = 20[1 − (19/20)^10] = 8.025 ■
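The indicator-variable calculation above is easy to check numerically. Below is a minimal Python sketch (the function names are ours, not from the text) that computes the exact expectation 20[1 − (19/20)^10] and compares it with a direct simulation of drawing 10 coupons from 20 equally likely types:

```python
import random

def expected_distinct_types(n_types=20, n_coupons=10):
    """Exact expectation via indicator variables:
    E[X] = n_types * (1 - ((n_types - 1)/n_types)**n_coupons)."""
    return n_types * (1 - ((n_types - 1) / n_types) ** n_coupons)

def simulate_distinct_types(n_types=20, n_coupons=10, trials=100_000, seed=1):
    """Estimate E[X] by repeatedly drawing coupons and counting distinct types."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        total += len({rng.randrange(n_types) for _ in range(n_coupons)})
    return total / trials
```

The exact value rounds to 8.025, and the simulated average agrees to within sampling error.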

An important property of the mean arises when one must predict the value of a random variable. That is, suppose that the value of a random variable X is to be predicted. If we predict that X will equal c, then the square of the “error” involved will be (X − c)^2. We will now show that the average squared error is minimized when we predict that X will equal its mean μ. To see this, note that for any constant c


E[(X − c)^2] = E[(X − μ + μ − c)^2]

             = E[(X − μ)^2 + 2(μ − c)(X − μ) + (μ − c)^2]

             = E[(X − μ)^2] + 2(μ − c)E[X − μ] + (μ − c)^2

             = E[(X − μ)^2] + (μ − c)^2    since E[X − μ] = E[X] − μ = 0

             ≥ E[(X − μ)^2]

Hence, the best predictor of a random variable, in terms of minimizing its mean square
error, is just its mean.
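This optimality of the mean is easy to see empirically. The Python sketch below (the sample data and names are ours, chosen for illustration) estimates the mean squared prediction error for candidate predictors c; for the sample mean the identity mse(c) = mse(mu) + (mu − c)^2 holds exactly, so the error is smallest at c = mu:

```python
import random

rng = random.Random(0)
xs = [rng.gauss(5.0, 2.0) for _ in range(50_000)]  # arbitrary sample data

mu = sum(xs) / len(xs)  # the sample mean plays the role of E[X]

def mse(c):
    """Average squared error when every observation is predicted by c."""
    return sum((x - c) ** 2 for x in xs) / len(xs)

# mse(c) = mse(mu) + (mu - c)**2, so mse is minimized at c = mu
```

Shifting the predictor away from the mean in either direction strictly increases the average squared error, mirroring the inequality in the derivation above.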


4.6 Variance


Given a random variable X along with its probability distribution function, it would be extremely useful if we were able to summarize the essential properties of the mass function by certain suitably defined measures. One such measure would be E[X], the expected value of X. However, while E[X] yields the weighted average of the possible values of X, it does not tell us anything about the variation, or spread, of these values. For instance, while the following random variables W, Y, and Z, having probability mass functions determined by


W = 0      with probability 1

Y = −1     with probability 1/2
     1     with probability 1/2

Z = −100   with probability 1/2
     100   with probability 1/2

all have the same expectation — namely, 0 — there is much greater spread in the possible values of Y than in those of W (which is a constant) and in the possible values of Z than in those of Y.
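The difference among W, Y, and Z can be quantified by computing E[(X − μ)^2], the mean squared deviation from the mean, for each. A small Python sketch (the pmf representation as (value, probability) pairs and the helper names are ours):

```python
def mean(pmf):
    """Expected value of a pmf given as a list of (value, probability) pairs."""
    return sum(x * p for x, p in pmf)

def mean_sq_deviation(pmf):
    """E[(X - mu)^2]: the average squared distance from the mean."""
    mu = mean(pmf)
    return sum((x - mu) ** 2 * p for x, p in pmf)

W = [(0, 1.0)]
Y = [(-1, 0.5), (1, 0.5)]
Z = [(-100, 0.5), (100, 0.5)]
```

All three have mean 0, yet the mean squared deviations are 0, 1, and 10,000 respectively, capturing exactly the increasing spread described above.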
