Fundamentals of Probability and Statistics for Engineers


and (11.23) with the Cramér–Rao lower bounds defined in Section 9.2.2. In
order to evaluate these lower bounds, a probability distribution of Y must be
made available. Without this knowledge, however, we can still show, in Theorem
11.2, that the least-square technique leads to linear unbiased minimum-variance
estimators for α and β; that is, among all unbiased estimators that are linear
in Y, least-square estimators have minimum variance.


Theorem 11.2: Let random variable Y be defined by Equation (11.4). Given
a sample (x_1, Y_1), (x_2, Y_2), ..., (x_n, Y_n) of Y with its associated x values,
the least-square estimators Â and B̂ given by Equation (11.17) are minimum-variance
linear unbiased estimators for α and β, respectively.
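Equation (11.17) itself is not reproduced in this excerpt; assuming it takes the standard closed form for the simple linear model Y_i = α + βx_i + E_i, the estimators Â and B̂ can be computed as in the sketch below (the function name `least_squares` is ours, not the book's).

```python
# Least-square estimators for the simple linear model Y = alpha + beta*x + E.
# Standard closed-form expressions; Equation (11.17) in the text is assumed
# to be of this form.

def least_squares(x, y):
    """Return (A_hat, B_hat), the least-square estimates of (alpha, beta)."""
    n = len(x)
    x_bar = sum(x) / n
    y_bar = sum(y) / n
    # B_hat = sum (x_i - x_bar)(Y_i - Y_bar) / sum (x_i - x_bar)^2
    b_hat = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) \
            / sum((xi - x_bar) ** 2 for xi in x)
    # A_hat = Y_bar - B_hat * x_bar
    a_hat = y_bar - b_hat * x_bar
    return a_hat, b_hat

# Noise-free data generated with alpha = 1, beta = 2 recovers the parameters.
x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [1.0 + 2.0 * xi for xi in x]
print(least_squares(x, y))  # -> (1.0, 2.0)
```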


Proof of Theorem 11.2: the proof of this important theorem is sketched
below with the use of vector–matrix notation.
Consider a linear unbiased estimator of the form

$$\mathbf{Q}^* = [(\mathbf{C}^T\mathbf{C})^{-1}\mathbf{C}^T + \mathbf{G}]\mathbf{Y}. \qquad (11.24)$$

We thus wish to prove that G = 0 if Q* is to be of minimum variance.
The unbiasedness requirement leads to, in view of Equation (11.19),

$$\mathbf{G}\mathbf{C} = \mathbf{0}. \qquad (11.25)$$

Consider now the covariance matrix

$$\mathrm{cov}\{\mathbf{Q}^*\} = E\{(\mathbf{Q}^* - \mathbf{q})(\mathbf{Q}^* - \mathbf{q})^T\}. \qquad (11.26)$$

Upon using Equations (11.19), (11.24), and (11.25) and expanding the covariance,
we have

$$\mathrm{cov}\{\mathbf{Q}^*\} = \sigma^2[(\mathbf{C}^T\mathbf{C})^{-1} + \mathbf{G}\mathbf{G}^T].$$

Now, in order to minimize the variances associated with the components of Q*,
we must minimize each diagonal element of GG^T. Since the iith diagonal
element of GG^T is given by

$$(\mathbf{G}\mathbf{G}^T)_{ii} = \sum_{j=1}^{n} g_{ij}^2,$$

where g_{ij} is the ijth element of G, we must have

$$g_{ij} = 0, \quad \text{for all } i \text{ and } j;$$

and we obtain

$$\mathbf{G} = \mathbf{0}. \qquad (11.27)$$
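The argument above can be checked numerically. The sketch below, a Monte Carlo illustration under assumed values (x = (0, 1, 2, 3), α = 1, β = 2, σ = 0.5, and one particular G with GC = 0), compares the least-square estimator (G = 0) against a competing linear unbiased estimator (G ≠ 0): both come out unbiased, but the G ≠ 0 choice shows the larger component variances predicted by cov{Q*} = σ²[(CᵀC)⁻¹ + GGᵀ].

```python
import numpy as np

rng = np.random.default_rng(0)

# Design matrix C for the linear model Y = C q + E, with q = (alpha, beta)^T.
x = np.array([0.0, 1.0, 2.0, 3.0])
C = np.column_stack([np.ones_like(x), x])
q = np.array([1.0, 2.0])          # assumed true (alpha, beta)
sigma = 0.5                       # assumed noise standard deviation

# A matrix G satisfying G C = 0: each row is orthogonal to both columns of C,
# so Q* = [(C^T C)^{-1} C^T + G] Y remains an unbiased linear estimator.
G = 0.3 * np.array([[1.0, -2.0, 1.0, 0.0],
                    [0.0, 1.0, -2.0, 1.0]])
assert np.allclose(G @ C, 0.0)    # unbiasedness condition (11.25)

H = np.linalg.inv(C.T @ C) @ C.T  # least-square estimator: Q_hat = H Y

trials = 20000
ls_est = np.empty((trials, 2))
alt_est = np.empty((trials, 2))
for t in range(trials):
    Y = C @ q + sigma * rng.standard_normal(4)
    ls_est[t] = H @ Y             # G = 0: the least-square estimator
    alt_est[t] = (H + G) @ Y      # competing linear unbiased estimator

# Both means approximate q, but the least-square variances are smaller,
# consistent with cov{Q*} = sigma^2 [(C^T C)^{-1} + G G^T].
print("mean (least squares):", ls_est.mean(axis=0))
print("mean (G != 0):       ", alt_est.mean(axis=0))
print("var  (least squares):", ls_est.var(axis=0))
print("var  (G != 0):       ", alt_est.var(axis=0))
```

With these numbers, the theoretical variance excess of the G ≠ 0 estimator is σ²(GGᵀ)_ii = 0.25 × 0.54 = 0.135 per component, which the simulation reproduces to within sampling error.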
