Now let us determine the estimator produced by minimizing the weighted sum of
squares. That is, let us determine the value of μ, call it μ_w, that minimizes

$$
\frac{(Y_1 - k\mu)^2}{\operatorname{Var}(Y_1)} + \frac{[Y_2 - (n-k)\mu]^2}{\operatorname{Var}(Y_2)}
$$

Since


$$
\operatorname{Var}(Y_1) = k\sigma^2, \qquad \operatorname{Var}(Y_2) = (n-k)\sigma^2
$$

this is equivalent to choosing μ to minimize


$$
\frac{(Y_1 - k\mu)^2}{k} + \frac{[Y_2 - (n-k)\mu]^2}{n-k}
$$

Upon differentiating and then equating to 0, we see that μ_w, the minimizing value, satisfies


$$
\frac{-2k(Y_1 - k\mu_w)}{k} - \frac{2(n-k)[Y_2 - (n-k)\mu_w]}{n-k} = 0
$$

or


$$
Y_1 + Y_2 = n\mu_w
$$

or


$$
\mu_w = \frac{Y_1 + Y_2}{n}
$$

That is, the weighted least squares estimator is indeed the preferred estimator
(Y_1 + Y_2)/n = X̄. ■
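
The same computation can be checked symbolically. The sketch below is a minimal illustration assuming SymPy is available; the symbol names Y1, Y2, k, n, mu are chosen here for the illustration and are not from the text. It differentiates the weighted sum of squares above and confirms that the minimizer is (Y_1 + Y_2)/n.

```python
# Symbolic check of the weighted least squares derivation (illustrative sketch).
import sympy as sp

Y1, Y2, mu = sp.symbols('Y1 Y2 mu')
n, k = sp.symbols('n k', positive=True)

# Weighted sum of squares with Var(Y1) = k*sigma^2 and Var(Y2) = (n - k)*sigma^2;
# the common factor sigma^2 does not affect the minimizer, so it is dropped.
S = (Y1 - k * mu)**2 / k + (Y2 - (n - k) * mu)**2 / (n - k)

# Differentiate with respect to mu, set the derivative to 0, and solve.
mu_w = sp.solve(sp.diff(S, mu), mu)[0]
print(sp.simplify(mu_w))  # prints (Y1 + Y2)/n, i.e., the sample mean
```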


REMARKS


(a) Assuming normally distributed data, the weighted least squares estimators are precisely
the maximum likelihood estimators. This follows because the joint density of the data
Y_1, ..., Y_n is


$$
f_{Y_1,\dots,Y_n}(y_1,\dots,y_n) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi}\,(\sigma/\sqrt{w_i})}\, e^{-(y_i - \alpha - \beta x_i)^2/(2\sigma^2/w_i)}
= \frac{\sqrt{w_1 \cdots w_n}}{(2\pi)^{n/2}\,\sigma^n}\, e^{-\sum_{i=1}^{n} w_i (y_i - \alpha - \beta x_i)^2/(2\sigma^2)}
$$
Consequently, the maximum likelihood estimators of α and β are precisely the values of
α and β that minimize the weighted sum of squares

$$
\sum_{i=1}^{n} w_i (y_i - \alpha - \beta x_i)^2
$$
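
To illustrate this equivalence numerically, here is a minimal sketch assuming NumPy and SciPy are available; the simulated data, the weights, and the parameter values (α = 2, β = 0.7, σ = 1.5) are invented for illustration only. It fits α and β by weighted least squares through the normal equations and then maximizes the normal log-likelihood numerically; the two sets of estimates agree up to numerical tolerance.

```python
# Illustrative sketch: weighted least squares estimates coincide with the
# maximum likelihood estimates when the errors are normal with Var(Y_i) = sigma^2 / w_i.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 50
x = rng.uniform(0, 10, n)
w = rng.uniform(0.5, 2.0, n)                              # known weights
sigma = 1.5
y = 2.0 + 0.7 * x + rng.normal(0, sigma / np.sqrt(w))     # Var(Y_i) = sigma^2 / w_i

# Weighted least squares via the normal equations for alpha and beta.
Sw, Swx, Swy = w.sum(), (w * x).sum(), (w * y).sum()
Swxx, Swxy = (w * x * x).sum(), (w * x * y).sum()
beta_wls = (Sw * Swxy - Swx * Swy) / (Sw * Swxx - Swx**2)
alpha_wls = (Swy - beta_wls * Swx) / Sw

# Maximum likelihood: minimize the negative log-likelihood in (alpha, beta);
# terms not involving alpha or beta are constants and are dropped.
def neg_log_lik(params):
    a, b = params
    return np.sum(w * (y - a - b * x) ** 2) / (2 * sigma**2)

res = minimize(neg_log_lik, x0=[0.0, 0.0])
print(alpha_wls, beta_wls)   # weighted least squares estimates
print(res.x)                 # maximum likelihood estimates (should agree)
```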
