

That is, it is an unbiased estimator. Also, using the fact that the variance of a random
variable is equal to the covariance between that random variable and itself, we see that


$$\begin{aligned}
\operatorname{Var}\left(\sum_{i=0}^{k} x_i B_i\right)
&= \operatorname{Cov}\left(\sum_{i=0}^{k} x_i B_i,\ \sum_{j=0}^{k} x_j B_j\right) \qquad (9.10.9)\\[4pt]
&= \sum_{i=0}^{k}\sum_{j=0}^{k} x_i x_j \operatorname{Cov}(B_i, B_j)
\end{aligned}$$
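For example, in the case $k=1$ the double sum is just the familiar expansion of the variance of a sum of two dependent terms:

$$\operatorname{Var}(x_0 B_0 + x_1 B_1) = x_0^2\operatorname{Var}(B_0) + 2x_0 x_1\operatorname{Cov}(B_0, B_1) + x_1^2\operatorname{Var}(B_1)$$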

If we let $\mathbf{x}$ denote the matrix

$$\mathbf{x} = \begin{pmatrix} x_0 \\ x_1 \\ \vdots \\ x_k \end{pmatrix}$$

then, recalling that $\operatorname{Cov}(B_i, B_j)/\sigma^2$ is the element in the $(i+1)$st row and $(j+1)$st column of $(\mathbf{X}'\mathbf{X})^{-1}$, we can express Equation 9.10.9 as


$$\operatorname{Var}\left(\sum_{i=0}^{k} x_i B_i\right) = \mathbf{x}'(\mathbf{X}'\mathbf{X})^{-1}\mathbf{x}\,\sigma^2 \qquad (9.10.10)$$
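As a small numerical check (a sketch in Python/NumPy with a made-up design matrix X, query point x, and value of $\sigma^2$, none of which appear in the text), the matrix form of Equation 9.10.10 agrees with the double sum of Equation 9.10.9 when $\operatorname{Cov}(B_i, B_j)$ is taken to be $\sigma^2$ times the corresponding element of $(\mathbf{X}'\mathbf{X})^{-1}$:

```python
import numpy as np

# Hypothetical design matrix X (first column all 1s) and query point x;
# the numbers are illustrative only and do not come from the text.
X = np.array([[1.0, 2.0, 3.0],
              [1.0, 4.0, 1.0],
              [1.0, 5.0, 7.0],
              [1.0, 6.0, 2.0],
              [1.0, 8.0, 5.0]])
x = np.array([1.0, 3.0, 4.0])   # (x_0, x_1, x_2) with x_0 = 1
sigma2 = 2.5                    # an assumed value of sigma^2

XtX_inv = np.linalg.inv(X.T @ X)

# Matrix form, Equation 9.10.10: x'(X'X)^{-1} x * sigma^2
var_matrix_form = x @ XtX_inv @ x * sigma2

# Double-sum form, Equation 9.10.9, with Cov(B_i, B_j) = sigma^2 * [(X'X)^{-1}]_{ij}
cov_B = sigma2 * XtX_inv
var_double_sum = sum(x[i] * x[j] * cov_B[i, j]
                     for i in range(len(x))
                     for j in range(len(x)))

print(var_matrix_form, var_double_sum)   # the two values agree
```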

Using Equations 9.10.8 and 9.10.10, we see that


$$\frac{\displaystyle\sum_{i=0}^{k} x_i B_i - \sum_{i=0}^{k} x_i \beta_i}{\sigma\sqrt{\mathbf{x}'(\mathbf{X}'\mathbf{X})^{-1}\mathbf{x}}} \sim N(0, 1)$$

If we now replace $\sigma$ by its estimator $\sqrt{SS_R/(n-k-1)}$, we obtain, by the usual argument, that


$$\frac{\displaystyle\sum_{i=0}^{k} x_i B_i - \sum_{i=0}^{k} x_i \beta_i}{\sqrt{\dfrac{SS_R}{n-k-1}\,\mathbf{x}'(\mathbf{X}'\mathbf{X})^{-1}\mathbf{x}}} \sim t_{n-k-1}$$

which gives rise to the following confidence interval estimator of $\sum_{i=0}^{k} x_i \beta_i$.

Confidence Interval Estimate of $E[Y\mid\mathbf{x}] = \sum_{i=0}^{k} x_i \beta_i$, $(x_0 \equiv 1)$

A $100(1-\alpha)$ percent confidence interval estimate of $\sum_{i=0}^{k} x_i \beta_i$ is given by

$$\sum_{i=0}^{k} x_i b_i \pm \sqrt{\dfrac{SS_R}{n-k-1}\,\mathbf{x}'(\mathbf{X}'\mathbf{X})^{-1}\mathbf{x}}\;\, t_{\alpha/2,\,n-k-1}$$
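The following Python sketch shows one way to compute this interval from data. The function name mean_response_ci, the data in X and Y, and the query point x0 are all hypothetical and for illustration only; the steps, however, follow the boxed formula: compute the least squares estimates $b = (\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'\mathbf{Y}$, form $SS_R$ as the sum of squared residuals, and combine them with the appropriate $t$ critical value.

```python
import numpy as np
from scipy import stats

def mean_response_ci(X, Y, x0, alpha=0.05):
    """100(1 - alpha)% confidence interval for E[Y | x0] = sum_i x0_i * beta_i.

    X  : n x (k+1) design matrix whose first column is all 1s
    Y  : response vector of length n
    x0 : input vector (x_0, x_1, ..., x_k) with x_0 = 1
    """
    n, p = X.shape                    # p = k + 1
    XtX_inv = np.linalg.inv(X.T @ X)
    b = XtX_inv @ X.T @ Y             # least squares estimates b_0, ..., b_k
    residuals = Y - X @ b
    SSR = residuals @ residuals       # sum of squared residuals
    center = x0 @ b                   # point estimate of E[Y | x0]
    t_crit = stats.t.ppf(1 - alpha / 2, n - p)
    half_width = np.sqrt(SSR / (n - p) * (x0 @ XtX_inv @ x0)) * t_crit
    return center - half_width, center + half_width

# Illustrative data (hypothetical values, for demonstration only)
X = np.array([[1.0, 2.0, 3.0],
              [1.0, 4.0, 1.0],
              [1.0, 5.0, 7.0],
              [1.0, 6.0, 2.0],
              [1.0, 8.0, 5.0],
              [1.0, 9.0, 6.0]])
Y = np.array([7.1, 9.8, 16.0, 12.9, 18.2, 20.5])
x0 = np.array([1.0, 3.0, 4.0])

print(mean_response_ci(X, Y, x0, alpha=0.05))   # 95 percent interval
```

Forming $(\mathbf{X}'\mathbf{X})^{-1}$ explicitly mirrors the formula in the text; for larger or ill-conditioned problems, a solver such as np.linalg.lstsq, which avoids the explicit inverse, is numerically preferable.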