then $\mathbf{E}$ is a zero-mean random vector with covariance matrix $\sigma^2 \mathbf{I}$, $\mathbf{I}$ being the $n \times n$ identity matrix.
The mean and variance of estimator $\hat{\mathbf{Q}}$ are now easily determined. In view of Equations (11.17) and (11.19), we have

$$
E\{\hat{\mathbf{Q}}\} = (C^T C)^{-1} C^T E\{\mathbf{Y}\}
= (C^T C)^{-1} C^T \bigl( C\mathbf{q} + E\{\mathbf{E}\} \bigr)
= (C^T C)^{-1} (C^T C)\,\mathbf{q} = \mathbf{q}. \tag{11.20}
$$

Hence, estimators $\hat{A}$ and $\hat{B}$ for $\alpha$ and $\beta$, respectively, are unbiased.
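As a quick illustration of the unbiasedness result (11.20), the following sketch (in Python with NumPy; it is not part of the text, and the design points, true parameters, noise level, and the helper name `q_hat` are all illustrative choices) simulates repeated samples from the model $\mathbf{Y} = C\mathbf{q} + \mathbf{E}$ and averages the least-squares estimates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: Y = C q + E, with q = (alpha, beta)^T.
# All numerical values here are illustrative, not from the text.
n = 20
x = np.linspace(0.0, 10.0, n)
C = np.column_stack([np.ones(n), x])   # i-th row is (1, x_i)
q = np.array([2.0, 0.5])               # true (alpha, beta)
sigma = 1.0                            # standard deviation of the errors E

# Least-squares estimator Q_hat = (C^T C)^{-1} C^T Y
def q_hat(y):
    return np.linalg.solve(C.T @ C, C.T @ y)

# Average the estimates over many simulated samples; by (11.20) the
# average should be close to the true parameter vector q.
estimates = np.array([q_hat(C @ q + sigma * rng.standard_normal(n))
                      for _ in range(5000)])
print("mean of (A_hat, B_hat):", estimates.mean(axis=0))   # ~ [2.0, 0.5]
```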
The covariance matrix associated with $\hat{\mathbf{Q}}$ is given by, as seen from Equation (11.17),

$$
\mathrm{cov}\{\hat{\mathbf{Q}}\} = E\{(\hat{\mathbf{Q}} - \mathbf{q})(\hat{\mathbf{Q}} - \mathbf{q})^T\}
= (C^T C)^{-1} C^T\, \mathrm{cov}\{\mathbf{Y}\}\, C\, (C^T C)^{-1}.
$$

But $\mathrm{cov}\{\mathbf{Y}\} = \sigma^2 \mathbf{I}$; we thus have

$$
\mathrm{cov}\{\hat{\mathbf{Q}}\} = \sigma^2 (C^T C)^{-1} C^T C (C^T C)^{-1} = \sigma^2 (C^T C)^{-1}. \tag{11.21}
$$

The diagonal elements of the matrix in Equation (11.21) give the variances of $\hat{A}$ and $\hat{B}$. In terms of the elements of $C$, we can write

$$
\mathrm{var}\{\hat{A}\} = \sigma^2 \left( \sum_{i=1}^{n} x_i^2 \right) \left[\, n \sum_{i=1}^{n} (x_i - \bar{x})^2 \right]^{-1}, \tag{11.22}
$$

and

$$
\mathrm{var}\{\hat{B}\} = \sigma^2 \left[\, \sum_{i=1}^{n} (x_i - \bar{x})^2 \right]^{-1}. \tag{11.23}
$$
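Equation (11.21) and the closed forms (11.22) and (11.23) can be cross-checked numerically. The sketch below (again Python/NumPy, with the same arbitrary design as in the previous sketch: $n = 20$ points spread over $[0, 10]$) builds $C$, evaluates $\sigma^2 (C^T C)^{-1}$, and confirms that its diagonal entries agree with the two closed forms:

```python
import numpy as np

# Same arbitrary design as in the previous sketch
n = 20
x = np.linspace(0.0, 10.0, n)
C = np.column_stack([np.ones(n), x])
sigma = 1.0

# Equation (11.21): cov{Q_hat} = sigma^2 (C^T C)^{-1}
cov_theory = sigma**2 * np.linalg.inv(C.T @ C)

# Closed forms for the diagonal entries
x_bar = x.mean()
s_xx = np.sum((x - x_bar)**2)
var_A = sigma**2 * np.sum(x**2) / (n * s_xx)   # Equation (11.22)
var_B = sigma**2 / s_xx                        # Equation (11.23)

print(np.allclose(cov_theory[0, 0], var_A))    # True
print(np.allclose(cov_theory[1, 1], var_B))    # True
```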
It is seen that these variances decrease as sample size $n$ increases, according to $1/n$. Thus, it follows from our discussion in Chapter 9 that these estimators are consistent – a desirable property. We further note that, for a fixed $n$, the variance of $\hat{B}$ can be reduced by selecting the $x_i$ in such a way that the denominator of Equation (11.23) is maximized; this can be accomplished by spreading the $x_i$ as far apart as possible. In Example 11.1, for example, assuming that we are free to choose the values of $x_i$, the quality of $\hat{B}$ is improved if one-half of the $x$ readings are taken at one extreme of the temperature range and the other half at the other extreme. However, the sampling strategy for minimizing $\mathrm{var}(\hat{A})$ for a fixed $n$ is to make $\bar{x}$ as close to zero as possible.
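To see the effect of these sampling strategies numerically, the hypothetical comparison below (not from the text; the interval, the designs, and the helper `var_B` are illustrative) evaluates Equation (11.23) for two designs of the same size: one with the $x_i$ split between the two extremes of the interval, one with the $x_i$ clustered near its centre:

```python
import numpy as np

def var_B(x, sigma=1.0):
    # Equation (11.23): var{B_hat} = sigma^2 / sum_i (x_i - x_bar)^2
    return sigma**2 / np.sum((x - np.mean(x))**2)

# Two hypothetical designs of n = 20 points on the interval [0, 10]
x_spread  = np.r_[np.zeros(10), np.full(10, 10.0)]   # half at each extreme
x_cluster = np.linspace(4.5, 5.5, 20)                # bunched mid-range

print(var_B(x_spread))    # 0.002  -- extremes maximize the denominator
print(var_B(x_cluster))   # ~0.54  -- clustering inflates var{B_hat}
```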
Are the variances given by Equations (11.22) and (11.23) minimum variances associated with any unbiased estimators for $\alpha$ and $\beta$? An answer to this important question can be found by comparing the results given by Equations (11.22)