Fundamentals of Probability and Statistics for Engineers


This completes the proof. The theorem stated above is a special case of the
Gauss–Markov theorem.


Another interesting comparison is that between the least-square estimators for $\alpha$ and $\beta$ and their maximum likelihood estimators with an assigned distribution for the random variable $Y$. It is left as an exercise to show that the maximum likelihood estimators for $\alpha$ and $\beta$ are identical to their least-square counterparts under the added assumption that $Y$ is normally distributed.
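In outline, and assuming the normal model of this chapter (so that $Y_i = \alpha + \beta x_i + E_i$ with the $E_i$ independent and $N(0, \sigma^2)$): the log-likelihood of the sample values $y_1, \ldots, y_n$ is

$$\ln L(\alpha, \beta, \sigma^2) = -\frac{n}{2} \ln (2\pi\sigma^2) - \frac{1}{2\sigma^2} \sum_{i=1}^{n} (y_i - \alpha - \beta x_i)^2 ,$$

and, for any fixed $\sigma^2$, maximizing $\ln L$ with respect to $\alpha$ and $\beta$ is the same as minimizing $\sum_{i=1}^{n} (y_i - \alpha - \beta x_i)^2$, which is precisely the least-square criterion.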


11.1.3 UNBIASED ESTIMATOR FOR $\sigma^2$

As we have shown, the method of least squares does not lead to an estimator for the variance $\sigma^2$ of $Y$, which is in general also an unknown quantity in linear regression models. In order to propose an estimator for $\sigma^2$, an intuitive choice is

$$\widehat{\sigma^2} = k \sum_{i=1}^{n} \left[ Y_i - (\hat{A} + \hat{B} x_i) \right]^2 , \qquad (11.28)$$

where the coefficient $k$ is to be chosen so that $\widehat{\sigma^2}$ is unbiased.
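As a minimal numerical sketch of the quantity in Equation (11.28) (the data, variable names, and the model $Y = \alpha + \beta x + E$ here are illustrative assumptions, not the book's):

```python
import numpy as np

# Sketch of the residual sum of squares appearing in Equation (11.28); the
# constant k multiplying it stays symbolic until unbiasedness fixes its value.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 25)
y = 2.0 + 0.5 * x + rng.normal(scale=1.5, size=x.size)  # true sigma = 1.5

# Least-square estimators of this section: B_hat from the normal equations,
# and A_hat = Y_bar - B_hat * x_bar.
x_bar, y_bar = x.mean(), y.mean()
b_hat = np.sum((x - x_bar) * (y - y_bar)) / np.sum((x - x_bar) ** 2)
a_hat = y_bar - b_hat * x_bar

# Sum over i of [Y_i - (A_hat + B_hat * x_i)]^2, the bracketed sum in (11.28).
ss_res = np.sum((y - (a_hat + b_hat * x)) ** 2)
print(ss_res)
```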
In order to carry out the expectation of $\widehat{\sigma^2}$, we note that [see Equation (11.7)]

$$Y_i - \hat{A} - \hat{B} x_i = Y_i - (\overline{Y} - \hat{B}\bar{x}) - \hat{B} x_i = (Y_i - \overline{Y}) - \hat{B}(x_i - \bar{x}). \qquad (11.29)$$


Hence, it follows that

$$\sum_{i=1}^{n} \left( Y_i - \hat{A} - \hat{B} x_i \right)^2 = \sum_{i=1}^{n} (Y_i - \overline{Y})^2 - \hat{B}^2 \sum_{i=1}^{n} (x_i - \bar{x})^2 , \qquad (11.30)$$


since [see Equation (11.8)]

$$\sum_{i=1}^{n} (x_i - \bar{x})(Y_i - \overline{Y}) = \hat{B} \sum_{i=1}^{n} (x_i - \bar{x})^2 . \qquad (11.31)$$

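Equations (11.30) and (11.31) are algebraic identities, so they hold for any data set once $\hat{A}$ and $\hat{B}$ take their least-square values. A quick numerical spot-check (all data here are arbitrary):

```python
import numpy as np

# Spot-check of identity (11.30): both sides must agree (up to floating-point
# error) for any data, given the least-square values of A_hat and B_hat.
rng = np.random.default_rng(1)
x = rng.uniform(0.0, 10.0, size=30)
y = 1.0 + 0.8 * x + rng.normal(scale=2.0, size=x.size)

x_bar, y_bar = x.mean(), y.mean()
sxx = np.sum((x - x_bar) ** 2)
b_hat = np.sum((x - x_bar) * (y - y_bar)) / sxx  # Equation (11.31), rearranged
a_hat = y_bar - b_hat * x_bar

lhs = np.sum((y - a_hat - b_hat * x) ** 2)        # left side of (11.30)
rhs = np.sum((y - y_bar) ** 2) - b_hat**2 * sxx   # right side of (11.30)
assert np.isclose(lhs, rhs)
print(lhs, rhs)
```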

Upon taking expectations term by term, we can show that

$$E\{\widehat{\sigma^2}\} = k\, E\left\{ \sum_{i=1}^{n} (Y_i - \overline{Y})^2 - \hat{B}^2 \sum_{i=1}^{n} (x_i - \bar{x})^2 \right\} = k(n-2)\sigma^2 .$$
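It follows that $\widehat{\sigma^2}$ is unbiased when $k = 1/(n-2)$. A Monte Carlo sketch of this conclusion (the model and all parameter values below are illustrative assumptions):

```python
import numpy as np

# Monte Carlo sketch: with k = 1/(n - 2), the average of sigma2_hat over many
# simulated samples should approach the true sigma^2, i.e. it is unbiased.
rng = np.random.default_rng(2)
n, sigma = 20, 1.5
alpha, beta = 2.0, 0.5
x = np.linspace(0.0, 10.0, n)
x_bar = x.mean()
sxx = np.sum((x - x_bar) ** 2)

estimates = []
for _ in range(20_000):
    y = alpha + beta * x + rng.normal(scale=sigma, size=n)
    b_hat = np.sum((x - x_bar) * (y - y.mean())) / sxx
    a_hat = y.mean() - b_hat * x_bar
    estimates.append(np.sum((y - a_hat - b_hat * x) ** 2) / (n - 2))

print(np.mean(estimates), sigma**2)  # the two values should nearly agree
```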
