
Then

$$E[A_1\mathbf{W}_1 + A_2\mathbf{W}_2] = A_1E[\mathbf{W}_1] + A_2E[\mathbf{W}_2] \tag{2.6.11}$$
$$E[A_1\mathbf{W}_1 B] = A_1E[\mathbf{W}_1]B. \tag{2.6.12}$$

Proof: Because of the linearity of the operator $E$ on random variables, we have for the $(i, j)$th components of expression (2.6.11) that


$$E\left[\sum_{s=1}^{m} a_{1is}W_{1sj} + \sum_{s=1}^{m} a_{2is}W_{2sj}\right] = \sum_{s=1}^{m} a_{1is}E[W_{1sj}] + \sum_{s=1}^{m} a_{2is}E[W_{2sj}].$$

Hence by (2.6.10), expression (2.6.11) is true. The derivation of expression (2.6.12)
follows in the same manner.
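
As a quick numerical illustration of (2.6.11), not from the text, the following sketch compares a Monte Carlo estimate of $E[A_1\mathbf{W}_1 + A_2\mathbf{W}_2]$ against the exact right side $A_1E[\mathbf{W}_1] + A_2E[\mathbf{W}_2]$. All dimensions and distributions here are made up for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up dimensions: A1, A2 are k x m constants; W1, W2 are m x p random matrices.
m, p, k = 3, 2, 4
A1 = rng.normal(size=(k, m))
A2 = rng.normal(size=(k, m))

# Many iid draws of W1 and W2 (entry distributions chosen arbitrarily).
n = 200_000
W1 = rng.normal(loc=1.0, size=(n, m, p))         # E[W1] is the all-ones matrix
W2 = rng.exponential(scale=2.0, size=(n, m, p))  # each entry of E[W2] is 2

EW1 = np.ones((m, p))        # exact E[W1]
EW2 = 2.0 * np.ones((m, p))  # exact E[W2]

# Monte Carlo estimate of the left side of (2.6.11) ...
lhs = (A1 @ W1 + A2 @ W2).mean(axis=0)
# ... versus the exact right side A1 E[W1] + A2 E[W2].
rhs = A1 @ EW1 + A2 @ EW2

print(np.abs(lhs - rhs).max())  # small; shrinks as n grows
```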

Let $\mathbf{X} = (X_1, \ldots, X_n)'$ be an $n$-dimensional random vector such that $\sigma_i^2 = \text{Var}(X_i) < \infty$. The mean of $\mathbf{X}$ is $\boldsymbol{\mu} = E[\mathbf{X}]$, and we define its variance-covariance matrix as

$$\text{Cov}(\mathbf{X}) = E[(\mathbf{X} - \boldsymbol{\mu})(\mathbf{X} - \boldsymbol{\mu})'] = [\sigma_{ij}], \tag{2.6.13}$$

where $\sigma_{ii}$ denotes $\sigma_i^2$. As Exercise 2.6.8 shows, the $i$th diagonal entry of $\text{Cov}(\mathbf{X})$ is $\sigma_i^2 = \text{Var}(X_i)$ and the $(i, j)$th off-diagonal entry is $\text{Cov}(X_i, X_j)$.
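
A minimal sketch of definition (2.6.13) in code, using a made-up distribution for $\mathbf{X}$: the sample analogue of $E[(\mathbf{X} - \boldsymbol{\mu})(\mathbf{X} - \boldsymbol{\mu})']$ is computed directly and cross-checked against NumPy's covariance routine.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical example: draws from a 3-dimensional random vector X.
# Rows of `draws` are realizations of X = (X1, X2, X3)'.
Sigma = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.3],
                  [0.0, 0.3, 1.5]])
draws = rng.multivariate_normal(mean=[0.0, 1.0, 2.0], cov=Sigma, size=100_000)

mu_hat = draws.mean(axis=0)                   # estimate of mu = E[X]
centered = draws - mu_hat                     # X - mu for each draw
cov_hat = centered.T @ centered / len(draws)  # sample analogue of E[(X-mu)(X-mu)']

# Diagonal entries estimate Var(X_i); off-diagonals estimate Cov(X_i, X_j).
# bias=True makes numpy divide by n, matching the average above.
print(np.allclose(cov_hat, np.cov(draws, rowvar=False, bias=True)))  # True
```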


Example 2.6.3 (Example 2.5.6, Continued). In Example 2.5.6, we considered the joint pdf

$$f(x, y) = \begin{cases} e^{-y} & 0 < x < y < \infty \\ 0 & \text{elsewhere}, \end{cases}$$

and showed that the first two moments are

$$\mu_1 = 1, \quad \mu_2 = 2$$
$$\sigma_1^2 = 1, \quad \sigma_2^2 = 2 \tag{2.6.14}$$
$$E[(X - \mu_1)(Y - \mu_2)] = 1.$$

Let $\mathbf{Z} = (X, Y)'$. Then using the present notation, we have

$$E[\mathbf{Z}] = \begin{bmatrix} 1 \\ 2 \end{bmatrix} \quad \text{and} \quad \text{Cov}(\mathbf{Z}) = \begin{bmatrix} 1 & 1 \\ 1 & 2 \end{bmatrix}.
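
These values are easy to check by simulation. On $0 < x < y$, the pdf factors as $e^{-y} = e^{-x}\,e^{-(y-x)}$, so $X$ is standard exponential and $Y - X$ is an independent standard exponential; the following sketch (not from the text) uses that representation to sample from the joint pdf.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000

# f(x, y) = e^{-y} = e^{-x} * e^{-(y - x)} on 0 < x < y, so
# X ~ Exp(1) and Y - X ~ Exp(1), independent of X.
x = rng.exponential(size=n)
y = x + rng.exponential(size=n)

Z = np.column_stack([x, y])
print(Z.mean(axis=0))           # approx [1, 2], matching E[Z]
print(np.cov(Z, rowvar=False))  # approx [[1, 1], [1, 2]], matching Cov(Z)
```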

Two properties of $\text{Cov}(\mathbf{X})$ needed later are summarized in the following theorem:


Theorem 2.6.3. Let $\mathbf{X} = (X_1, \ldots, X_n)'$ be an $n$-dimensional random vector such that $\sigma_i^2 = \sigma_{ii} = \text{Var}(X_i) < \infty$. Let $A$ be an $m \times n$ matrix of constants. Then

$$\text{Cov}(\mathbf{X}) = E[\mathbf{X}\mathbf{X}'] - \boldsymbol{\mu}\boldsymbol{\mu}' \tag{2.6.15}$$
$$\text{Cov}(A\mathbf{X}) = A\,\text{Cov}(\mathbf{X})A'. \tag{2.6.16}$$
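
As with the earlier results, (2.6.15) and (2.6.16) can be verified numerically. The sketch below uses an arbitrary made-up distribution for $\mathbf{X}$ and a random constant matrix $A$; with divide-by-$n$ sample moments, both identities hold up to floating-point rounding.

```python
import numpy as np

rng = np.random.default_rng(3)
n_draws, n, m = 500_000, 3, 2

# Arbitrary made-up random vector X with correlated, finite-variance entries.
mix = np.array([[1.0, 0.4, 0.0],
                [0.0, 1.0, 0.2],
                [0.0, 0.0, 1.0]])
X = rng.standard_t(df=8, size=(n_draws, n)) @ mix
A = rng.normal(size=(m, n))  # arbitrary m x n constant matrix

mu = X.mean(axis=0)
cov_X = np.cov(X, rowvar=False, bias=True)  # divide-by-n sample covariance

# (2.6.15): Cov(X) = E[XX'] - mu mu', here with sample moments.
EXX = X.T @ X / n_draws
print(np.allclose(cov_X, EXX - np.outer(mu, mu)))  # True

# (2.6.16): Cov(AX) = A Cov(X) A'. Rows of X @ A.T are the draws of AX.
cov_AX = np.cov(X @ A.T, rowvar=False, bias=True)
print(np.allclose(cov_AX, A @ cov_X @ A.T))        # True
```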