This completes the proof. The theorem stated above is a special case of the
Gauss–Markov theorem.
Another interesting comparison is that between the least-square estimators
for $\alpha$ and $\beta$ and their maximum likelihood estimators with an assigned distribution
for random variable $Y$. It is left as an exercise to show that the
maximum likelihood estimators for $\alpha$ and $\beta$ are identical to their least-square
counterparts under the added assumption that $Y$ is normally distributed.
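Though the details are left to the reader, a brief sketch of why this holds may be helpful (assuming, as in this chapter, that $Y$ is normal with mean $\alpha + \beta x$ and variance $\sigma^2$): the log-likelihood of a sample of $n$ independent observations is

$$\ln L(\alpha, \beta, \sigma^2) = -\frac{n}{2}\ln(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}\left(Y_i - \alpha - \beta x_i\right)^2,$$

and, for any fixed $\sigma^2$, maximizing it with respect to $\alpha$ and $\beta$ amounts to minimizing $\sum_{i=1}^{n}(Y_i - \alpha - \beta x_i)^2$, which is exactly the least-squares criterion.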
11.1.3 UNBIASED ESTIMATOR FOR $\sigma^2$
As we have shown, the method of least squares does not lead to an estimator
for the variance $\sigma^2$ of $Y$, which is in general also an unknown quantity in linear
regression models. In order to propose an estimator for $\sigma^2$, an intuitive choice is

$$\hat{\sigma}^2 = k \sum_{i=1}^{n} \left[ Y_i - (\hat{A} + \hat{B} x_i) \right]^2, \tag{11.28}$$

where coefficient $k$ is to be chosen so that $\hat{\sigma}^2$ is unbiased. In order to carry out
the expectation of $\hat{\sigma}^2$, we note that [see Equation (11.7)]

$$Y_i - (\hat{A} + \hat{B} x_i) = Y_i - (\bar{Y} - \hat{B}\bar{x}) - \hat{B} x_i = (Y_i - \bar{Y}) - \hat{B}(x_i - \bar{x}). \tag{11.29}$$

Hence, it follows that

$$\sum_{i=1}^{n} \left[ Y_i - (\hat{A} + \hat{B} x_i) \right]^2 = \sum_{i=1}^{n} (Y_i - \bar{Y})^2 - \hat{B}^2 \sum_{i=1}^{n} (x_i - \bar{x})^2, \tag{11.30}$$

since [see Equation (11.8)]

$$\sum_{i=1}^{n} (x_i - \bar{x})(Y_i - \bar{Y}) = \hat{B} \sum_{i=1}^{n} (x_i - \bar{x})^2. \tag{11.31}$$

Upon taking expectations term by term, we can show that

$$E\{\hat{\sigma}^2\} = k E\left\{ \sum_{i=1}^{n} (Y_i - \bar{Y})^2 - \hat{B}^2 \sum_{i=1}^{n} (x_i - \bar{x})^2 \right\} = k(n-2)\sigma^2.$$
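The term-by-term expectations can be sketched as follows (assuming the results established earlier for $\hat{B}$, namely $E\{\hat{B}\} = \beta$ and $\mathrm{var}\{\hat{B}\} = \sigma^2 / \sum_{i=1}^{n}(x_i - \bar{x})^2$, together with $E\{Y_i\} = \alpha + \beta x_i$ and $\mathrm{var}\{Y_i\} = \sigma^2$):

$$E\left\{\sum_{i=1}^{n}(Y_i - \bar{Y})^2\right\} = (n-1)\sigma^2 + \beta^2 \sum_{i=1}^{n}(x_i - \bar{x})^2,$$

$$E\{\hat{B}^2\} \sum_{i=1}^{n}(x_i - \bar{x})^2 = \left[ \mathrm{var}\{\hat{B}\} + \beta^2 \right] \sum_{i=1}^{n}(x_i - \bar{x})^2 = \sigma^2 + \beta^2 \sum_{i=1}^{n}(x_i - \bar{x})^2,$$

and the difference of these two expectations is $(n-2)\sigma^2$. It follows that $\hat{\sigma}^2$ is unbiased when $k = 1/(n-2)$.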
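As a quick numerical sanity check of this result, the following Python sketch simulates the normal linear model and verifies that $\hat{\sigma}^2$ with $k = 1/(n-2)$ averages to $\sigma^2$; the parameter values, sample size, and replication count are arbitrary illustrative choices, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative model parameters (arbitrary choices).
alpha, beta, sigma2 = 2.0, 0.5, 4.0
n = 30
x = np.linspace(0.0, 10.0, n)
xbar = x.mean()

n_rep = 20000
estimates = np.empty(n_rep)
for r in range(n_rep):
    # Y_i = alpha + beta * x_i + E_i, with E_i ~ N(0, sigma^2).
    y = alpha + beta * x + rng.normal(0.0, np.sqrt(sigma2), n)
    # Least-square estimators of beta and alpha.
    b_hat = np.sum((x - xbar) * (y - y.mean())) / np.sum((x - xbar) ** 2)
    a_hat = y.mean() - b_hat * xbar
    # Residual sum of squares scaled by k = 1/(n - 2), per Equation (11.28).
    estimates[r] = np.sum((y - (a_hat + b_hat * x)) ** 2) / (n - 2)

# The average of the estimates should be close to sigma2 = 4.0,
# consistent with the (n - 2) factor derived above.
print(estimates.mean())
```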