11.2 Basic Concepts of Probability Theory

Then the distribution function of Y, F_Y(y), by definition, is given by

F_Y(y) = P(Y \le y) = \underset{g(x_1, x_2, \ldots, x_n) \,\le\, y}{\int\!\int \cdots \int} f_{X_1, X_2, \ldots, X_n}(x_1, x_2, \ldots, x_n)\, dx_1\, dx_2 \cdots dx_n \qquad (11.32)

where the integration is carried out over the domain of the n-dimensional
(X_1, X_2, \ldots, X_n) space in which the inequality g(x_1, x_2, \ldots, x_n) \le y is satisfied. By
differentiating Eq. (11.32), we can obtain the density function of Y, f_Y(y).
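Equation (11.32) is rarely tractable in closed form, but the probability P[g(X_1, \ldots, X_n) \le y] can be estimated by sampling the joint density. The following is a minimal Monte Carlo sketch; the distributions and the function g are illustrative assumptions, not taken from the text: X_1 and X_2 are independent standard normal variables and g(x_1, x_2) = x_1^2 + x_2^2, so that Y is chi-square distributed with two degrees of freedom and F_Y(y) = 1 - e^{-y/2} supplies an exact check.

# Monte Carlo estimate of F_Y(y) = P(g(X1, X2) <= y) in Eq. (11.32).
# Illustrative assumptions (not from the text): X1, X2 iid standard normal,
# g(x1, x2) = x1^2 + x2^2, so Y ~ chi-square(2) and F_Y(y) = 1 - exp(-y/2).
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
x1 = rng.standard_normal(n)
x2 = rng.standard_normal(n)

y = 2.0
F_mc = np.mean(x1**2 + x2**2 <= y)   # fraction of samples with g <= y
F_exact = 1.0 - np.exp(-y / 2.0)     # exact CDF for this particular g

print(F_mc, F_exact)                 # both close to 0.6321

The sample fraction converges to the integral in Eq. (11.32) at the usual Monte Carlo rate, regardless of how many random variables appear in g.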
As in the case of a function of a single random variable, the mean and variance
of a function of several random variables are given by


E(Y) = E[g(X_1, X_2, \ldots, X_n)] = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} g(x_1, x_2, \ldots, x_n)\, f_{X_1, X_2, \ldots, X_n}(x_1, x_2, \ldots, x_n)\, dx_1\, dx_2 \cdots dx_n \qquad (11.33)

and


Var(Y) = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} [g(x_1, x_2, \ldots, x_n) - \overline{Y}]^2\, f_{X_1, X_2, \ldots, X_n}(x_1, x_2, \ldots, x_n)\, dx_1\, dx_2 \cdots dx_n \qquad (11.34)

where \overline{Y} = E(Y) denotes the mean value of Y.
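For small n, Eqs. (11.33) and (11.34) can also be evaluated by direct numerical quadrature. Below is a sketch for n = 2, again under illustrative assumptions not taken from the text: X_1 and X_2 are independent standard normals and g(x_1, x_2) = x_1 + x_2^2, for which E(Y) = 0 + 1 = 1 and Var(Y) = Var(X_1) + Var(X_2^2) = 1 + 2 = 3 serve as checks.

# Direct quadrature of Eqs. (11.33)-(11.34) for n = 2.
# Illustrative assumptions (not from the text): X1, X2 independent standard
# normals and g(x1, x2) = x1 + x2^2, giving E(Y) = 1 and Var(Y) = 3 as checks.
import numpy as np
from scipy.integrate import dblquad
from scipy.stats import norm

def joint_pdf(x1, x2):
    # independence => the joint density is the product of the marginals
    return norm.pdf(x1) * norm.pdf(x2)

def g(x1, x2):
    return x1 + x2**2

# dblquad integrates func(inner, outer); here x2 is inner, x1 is outer
E_Y, _ = dblquad(lambda x2, x1: g(x1, x2) * joint_pdf(x1, x2),
                 -np.inf, np.inf, -np.inf, np.inf)
Var_Y, _ = dblquad(lambda x2, x1: (g(x1, x2) - E_Y)**2 * joint_pdf(x1, x2),
                   -np.inf, np.inf, -np.inf, np.inf)

print(E_Y, Var_Y)   # approximately 1.0 and 3.0

The cost of such quadrature grows rapidly with n, which is why sampling estimates like the earlier Monte Carlo sketch are preferred when g depends on many random variables.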

In particular, if Y is a linear function of two random variables X_1 and X_2, we have

Y = a_1 X_1 + a_2 X_2

where a_1 and a_2 are constants. In this case


E(Y) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} (a_1 x_1 + a_2 x_2)\, f_{X_1, X_2}(x_1, x_2)\, dx_1\, dx_2
     = a_1 \int_{-\infty}^{\infty} x_1 f_{X_1}(x_1)\, dx_1 + a_2 \int_{-\infty}^{\infty} x_2 f_{X_2}(x_2)\, dx_2
     = a_1 E(X_1) + a_2 E(X_2) \qquad (11.35)

since integrating the joint density over one variable yields the marginal density of the other, f_{X_1}(x_1) = \int_{-\infty}^{\infty} f_{X_1, X_2}(x_1, x_2)\, dx_2.
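Note that Eq. (11.35) nowhere uses independence: linearity of expectation holds even for dependent X_1 and X_2. A quick numerical check, with illustrative distributions not taken from the text:

# Check of Eq. (11.35): E(a1 X1 + a2 X2) = a1 E(X1) + a2 E(X2), even though
# X2 is deliberately constructed to depend on X1.
# Illustrative assumptions (not from the text): X1 ~ N(2, 1) and
# X2 = X1 + U(0, 1), so E(X1) = 2 and E(X2) = 2.5.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
a1, a2 = 3.0, -2.0

x1 = rng.normal(2.0, 1.0, n)
x2 = x1 + rng.uniform(0.0, 1.0, n)        # strongly dependent on x1

lhs = np.mean(a1 * x1 + a2 * x2)          # E(a1 X1 + a2 X2)
rhs = a1 * np.mean(x1) + a2 * np.mean(x2) # a1 E(X1) + a2 E(X2)

print(lhs, rhs)   # both close to 3(2) - 2(2.5) = 1.0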

Thus the expected value of a sum is given by the sum of the expected values. The
variance of Y can be obtained as


Var(Y) = E\{[(a_1 X_1 + a_2 X_2) - (a_1 \overline{X}_1 + a_2 \overline{X}_2)]^2\}
       = E\{[a_1 (X_1 - \overline{X}_1) + a_2 (X_2 - \overline{X}_2)]^2\}
       = E[a_1^2 (X_1 - \overline{X}_1)^2 + 2 a_1 a_2 (X_1 - \overline{X}_1)(X_2 - \overline{X}_2) + a_2^2 (X_2 - \overline{X}_2)^2] \qquad (11.36)

Noting that the expected values of the first and third terms are variances, whereas
that of the middle term is a covariance, we obtain


Var(Y) = a_1^2\, Var(X_1) + a_2^2\, Var(X_2) + 2 a_1 a_2\, Cov(X_1, X_2) \qquad (11.37)
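Unlike the mean, the variance of a sum thus picks up a covariance term, which vanishes only when X_1 and X_2 are uncorrelated. A minimal sketch verifying Eq. (11.37) against sample statistics, with an illustrative bivariate normal not taken from the text:

# Check of Eq. (11.37) for correlated variables.
# Illustrative assumptions (not from the text): bivariate normal with
# Var(X1) = 1, Var(X2) = 4, Cov(X1, X2) = 1.
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000
a1, a2 = 2.0, 1.0
cov = np.array([[1.0, 1.0],
                [1.0, 4.0]])

x = rng.multivariate_normal([0.0, 0.0], cov, size=n)
y = a1 * x[:, 0] + a2 * x[:, 1]

var_mc = np.var(y)                                   # sample variance of Y
var_eq = (a1**2 * cov[0, 0] + a2**2 * cov[1, 1]
          + 2 * a1 * a2 * cov[0, 1])                 # Eq. (11.37)

print(var_mc, var_eq)   # both close to 4 + 4 + 4 = 12.0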