Fundamentals of Probability and Statistics for Engineers


then $\mathbf{E}$ is a zero-mean random vector with covariance matrix $\sigma^2 \mathbf{I}$, $\mathbf{I}$ being the $n \times n$ identity matrix.

The mean and variance of estimator $\hat{\boldsymbol{\Theta}}$ are now easily determined. In view of Equations (11.17) and (11.19), we have

$$
E\{\hat{\boldsymbol{\Theta}}\} = (\mathbf{C}^T\mathbf{C})^{-1}\mathbf{C}^T E\{\mathbf{Y}\}
= (\mathbf{C}^T\mathbf{C})^{-1}\mathbf{C}^T\bigl[\mathbf{C}\boldsymbol{\theta} + E\{\mathbf{E}\}\bigr]
= (\mathbf{C}^T\mathbf{C})^{-1}(\mathbf{C}^T\mathbf{C})\boldsymbol{\theta} = \boldsymbol{\theta}. \qquad (11.20)
$$

Hence, estimators $\hat{A}$ and $\hat{B}$ for $\alpha$ and $\beta$, respectively, are unbiased.

The covariance matrix associated with $\hat{\boldsymbol{\Theta}}$ is given by, as seen from Equation (11.17),

$$
\mathrm{cov}\{\hat{\boldsymbol{\Theta}}\} = E\{(\hat{\boldsymbol{\Theta}} - \boldsymbol{\theta})(\hat{\boldsymbol{\Theta}} - \boldsymbol{\theta})^T\}
= (\mathbf{C}^T\mathbf{C})^{-1}\mathbf{C}^T \,\mathrm{cov}\{\mathbf{Y}\}\, \mathbf{C}(\mathbf{C}^T\mathbf{C})^{-1}.
$$

But $\mathrm{cov}\{\mathbf{Y}\} = \sigma^2\mathbf{I}$; we thus have

$$
\mathrm{cov}\{\hat{\boldsymbol{\Theta}}\} = \sigma^2(\mathbf{C}^T\mathbf{C})^{-1}\mathbf{C}^T\mathbf{C}(\mathbf{C}^T\mathbf{C})^{-1} = \sigma^2(\mathbf{C}^T\mathbf{C})^{-1}. \qquad (11.21)
$$
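The unbiasedness result in Equation (11.20) and the covariance expression in Equation (11.21) can be checked numerically. The following is a minimal NumPy sketch, not part of the original text; the values of $\alpha$, $\beta$, $\sigma$, and the $x_i$ are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed illustrative values (not from the text): intercept alpha, slope beta,
# error standard deviation sigma, and n equally spaced x readings.
alpha, beta, sigma, n = 2.0, 0.5, 1.0, 20
x = np.linspace(0.0, 10.0, n)
C = np.column_stack([np.ones(n), x])     # design matrix C with rows (1, x_i)
theta = np.array([alpha, beta])

# Theoretical covariance matrix from Equation (11.21): sigma^2 (C^T C)^(-1).
cov_theory = sigma**2 * np.linalg.inv(C.T @ C)

# Repeat the experiment many times; each run gives one least-square estimate.
reps = 20000
estimates = np.empty((reps, 2))
for k in range(reps):
    y = C @ theta + rng.normal(0.0, sigma, size=n)     # Y = C theta + E
    estimates[k] = np.linalg.solve(C.T @ C, C.T @ y)   # (C^T C)^(-1) C^T y

print("mean of estimates :", estimates.mean(axis=0))             # near (alpha, beta), Eq. (11.20)
print("sample covariance :\n", np.cov(estimates, rowvar=False))  # near Eq. (11.21)
print("theoretical value :\n", cov_theory)
```

With this many replications, the sample mean and sample covariance of the estimates agree with the theoretical values to within Monte Carlo error.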


The diagonal elements of the matrix in Equation (11.21) give the variances of $\hat{A}$ and $\hat{B}$. In terms of the elements of $\mathbf{C}$, we can write

$$
\mathrm{var}\{\hat{A}\} = \sigma^2 \left(\sum_{i=1}^{n} x_i^2\right)\left[\,n \sum_{i=1}^{n} (x_i - \bar{x})^2\right]^{-1}, \qquad (11.22)
$$

$$
\mathrm{var}\{\hat{B}\} = \sigma^2 \left[\sum_{i=1}^{n} (x_i - \bar{x})^2\right]^{-1}. \qquad (11.23)
$$
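As a quick check, the variances given by Equations (11.22) and (11.23) coincide with the diagonal entries of $\sigma^2(\mathbf{C}^T\mathbf{C})^{-1}$. The short sketch below verifies this for an assumed $\sigma$ and an arbitrary set of $x_i$; both are illustrative choices, not values from the text.

```python
import numpy as np

sigma = 1.0                          # assumed error standard deviation
x = np.linspace(0.0, 10.0, 20)       # arbitrary x readings
n = x.size
C = np.column_stack([np.ones(n), x])

Sxx = np.sum((x - x.mean())**2)
var_A = sigma**2 * np.sum(x**2) / (n * Sxx)   # Equation (11.22)
var_B = sigma**2 / Sxx                        # Equation (11.23)

# Diagonal of sigma^2 (C^T C)^(-1) from Equation (11.21) gives the same numbers.
print(var_A, var_B)
print(np.diag(sigma**2 * np.linalg.inv(C.T @ C)))
```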


It is seen that these variances decrease as sample size $n$ increases, according to $1/n$. Thus, it follows from our discussion in Chapter 9 that these estimators are consistent, a desirable property. We further note that, for a fixed $n$, the variance of $\hat{B}$ can be reduced by selecting the $x_i$ in such a way that the denominator of Equation (11.23) is maximized; this can be accomplished by spreading the $x_i$ as far apart as possible. In Example 11.1, for example, assuming that we are free to choose the values of $x_i$, the quality of $\hat{B}$ is improved if one-half of the $x$ readings are taken at one extreme of the temperature range and the other half at the other extreme. However, the sampling strategy for minimizing $\mathrm{var}\{\hat{A}\}$ for a fixed $n$ is to make $\bar{x}$ as close to zero as possible.
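To illustrate this point, here is a small sketch, again with made-up numbers, comparing $\mathrm{var}\{\hat{A}\}$ and $\mathrm{var}\{\hat{B}\}$ for three hypothetical designs with the same $n$: readings spread evenly about zero, readings split between the two extremes of the range, and readings clustered in a narrow band away from zero.

```python
import numpy as np

def variances(x, sigma=1.0):
    """var{A_hat}, var{B_hat} from Equations (11.22) and (11.23)."""
    n = x.size
    Sxx = np.sum((x - x.mean())**2)
    return sigma**2 * np.sum(x**2) / (n * Sxx), sigma**2 / Sxx

n = 10
designs = {
    "evenly spread": np.linspace(-5.0, 5.0, n),
    "two extremes":  np.array([-5.0] * (n // 2) + [5.0] * (n // 2)),
    "clustered":     np.linspace(4.0, 5.0, n),   # narrow spread, x_bar far from zero
}

for name, x in designs.items():
    var_A, var_B = variances(x)
    print(f"{name:14s}  var(A_hat) = {var_A:.4f}   var(B_hat) = {var_B:.4f}")
```

With these numbers, placing half of the readings at each end of the range gives the smallest $\mathrm{var}\{\hat{B}\}$, the two designs centred at zero give the smaller $\mathrm{var}\{\hat{A}\}$, and the clustered design is worst on both counts, in line with the discussion above.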


Are the variances given by Equations (11.22) and (11.23) minimum variances associated with any unbiased estimators for $\alpha$ and $\beta$? An answer to this important question can be found by comparing the results given by Equations (11.22) and

