
9.10 Multiple Linear Regression


Since the $Y_r$ are independent with $\operatorname{Var}(Y_r) = \sigma^2$, we see that

$$\operatorname{Cov}(B_{i-1}, B_{j-1}) = \sigma^2 \sum_{r=1}^{n} C_{ir} C_{jr} = \sigma^2 (CC')_{ij} \tag{9.10.4}$$

where $(CC')_{ij}$ is the element in row $i$, column $j$ of $CC'$.
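The indexing claim is easy to verify numerically. A minimal sketch (numpy, with an arbitrary made-up matrix $C$) checking that entry $(i, j)$ of $CC'$ equals $\sum_r C_{ir} C_{jr}$:

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 2, 5                       # C has k + 1 rows and n columns
C = rng.normal(size=(k + 1, n))   # arbitrary illustrative matrix

i, j = 0, 2                       # any pair of row indices
entry = (C @ C.T)[i, j]                              # (CC')_{ij}
explicit = sum(C[i, r] * C[j, r] for r in range(n))  # sum_r C_ir C_jr
assert np.isclose(entry, explicit)
```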
If we now let $\operatorname{Cov}(\mathbf{B})$ denote the matrix of covariances, that is,

$$\operatorname{Cov}(\mathbf{B}) = \begin{pmatrix}
\operatorname{Cov}(B_0, B_0) & \cdots & \operatorname{Cov}(B_0, B_k) \\
\vdots & \ddots & \vdots \\
\operatorname{Cov}(B_k, B_0) & \cdots & \operatorname{Cov}(B_k, B_k)
\end{pmatrix}$$

then it follows from Equation 9.10.4 that


$$\operatorname{Cov}(\mathbf{B}) = \sigma^2 CC' \tag{9.10.5}$$
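Equation 9.10.5 is the matrix form of the covariance of a linear transformation. As a one-line check, writing $\mathbf{B} = C\mathbf{Y}$ with $C = (X'X)^{-1}X'$ and assuming, as in this model, that the $Y_r$ are independent with common variance (so $\operatorname{Cov}(\mathbf{Y}) = \sigma^2 I$),

$$\operatorname{Cov}(\mathbf{B}) = \operatorname{Cov}(C\mathbf{Y}) = C\,\operatorname{Cov}(\mathbf{Y})\,C' = C(\sigma^2 I)C' = \sigma^2 CC'$$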

Now

$$C' = \left((X'X)^{-1}X'\right)' = X\left((X'X)^{-1}\right)' = X(X'X)^{-1}$$

where the last equality follows since $(X'X)^{-1}$ is symmetric (since $X'X$ is) and so is equal to its transpose. Hence


$$\begin{aligned}
CC' &= (X'X)^{-1}X'X(X'X)^{-1} \\
    &= (X'X)^{-1}
\end{aligned}$$

and so we can conclude from Equation 9.10.5 that


$$\operatorname{Cov}(\mathbf{B}) = \sigma^2 (X'X)^{-1} \tag{9.10.6}$$

Since $\operatorname{Cov}(B_i, B_i) = \operatorname{Var}(B_i)$, it follows that the variances of the least squares estimators are given by $\sigma^2$ multiplied by the diagonal elements of $(X'X)^{-1}$.
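To make Equations 9.10.5 and 9.10.6 concrete, here is a minimal numpy sketch; the design matrix, $\beta$, and $\sigma$ are made-up values for illustration. It forms $C = (X'X)^{-1}X'$, checks that $CC'$ reduces to $(X'X)^{-1}$, and compares the theoretical covariance $\sigma^2 (X'X)^{-1}$ with the empirical covariance of the least squares estimates over repeated simulated samples:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 200, 2
sigma = 1.5
beta = np.array([1.0, -2.0, 0.5])   # hypothetical true coefficients B0, B1, B2

# Design matrix: a column of 1s (intercept) plus k input columns
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])

XtX_inv = np.linalg.inv(X.T @ X)
C = XtX_inv @ X.T                   # least squares estimator: B = C Y

# CC' collapses to (X'X)^{-1}, as derived above
assert np.allclose(C @ C.T, XtX_inv)

# Empirical covariance of B over repeated simulated responses
reps = 10_000
B = np.array([C @ (X @ beta + sigma * rng.normal(size=n)) for _ in range(reps)])

print(np.round(np.cov(B, rowvar=False), 4))   # empirical Cov(B)
print(np.round(sigma**2 * XtX_inv, 4))        # Equation 9.10.6
```

As the number of replications grows, the two matrices agree ever more closely; in particular, the simulated variances of $B_0, \ldots, B_k$ match the diagonal entries of $\sigma^2 (X'X)^{-1}$.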
