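Equations 9.8.1, referred to just below, are the weighted least squares normal equations; they are not reproduced in this excerpt, but their form can be recovered by minimizing the weighted sum of squares Σ_i w_i (Y_i − A − Bx_i)^2. Setting the partial derivatives with respect to A and B equal to zero gives, presumably,

Σ_i w_i Y_i = A Σ_i w_i + B Σ_i w_i x_i
Σ_i w_i x_i Y_i = A Σ_i w_i x_i + B Σ_i w_i x_i^2

With the weights w_i = 1/x_i used next, these reduce to Σ_i Y_i/x_i = A Σ_i 1/x_i + Bn and Σ_i Y_i = An + B Σ_i x_i, which is consistent with the numerical coefficients that follow (n = 10 and Σ_i x_i = 41).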

Using the preceding data with the weights w_i = 1/x_i, the least squares Equations 9.8.1 are

104.22 = 5.34A + 10B
277.9 = 10A + 41B

which yield the solution

A = 12.561,  B = 3.714

A graph of the estimated regression line 12.561 + 3.714x along with the data points is presented in Figure 9.12. As a qualitative check of our solution, note that the regression line fits the data pairs best when the input levels are small, which is as it should be since the weights are inversely proportional to the inputs. ■
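As a numerical check (an addition, not part of the original text), the two equations above form a 2×2 linear system that can be solved directly; a minimal sketch in Python using numpy:

    import numpy as np

    # Coefficients read off the weighted least squares equations above:
    #   5.34*A + 10*B = 104.22
    #   10*A   + 41*B = 277.9
    M = np.array([[5.34, 10.0],
                  [10.0, 41.0]])
    c = np.array([104.22, 277.9])

    A, B = np.linalg.solve(M, c)
    print(round(A, 3), round(B, 3))  # -> 12.561 3.714

Here np.linalg.solve handles the elimination on the 2×2 system and reproduces the solution quoted above.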


EXAMPLE 9.8c Consider the relationship between Y, the number of accidents on a heavily traveled highway, and x, the number of cars traveling on the highway. After a little thought it would probably seem to most that the linear model

Y = α + βx + e

would be appropriate. However, as there does not appear to be any a priori reason why Var(Y) should not depend on the input level x, it is not clear that we would be justified in using the ordinary least squares approach to estimate α and β. Indeed, we will now argue that a weighted least squares approach with weights 1/x should be employed — that is, we should choose A and B to minimize

Σ_i (Y_i − A − Bx_i)^2 / x_i
To see why these are the appropriate weights, imagine the x cars divided into x/d disjoint groups of d cars each, and let Y_i denote the number of accidents involving the cars of group i, so that Y = Y_1 + ··· + Y_{x/d}. Now in many applications it is probably reasonable to suppose that the Y_i are independent random variables with a common variance, and thus,

Var(Y) = Var(Y_1) + ··· + Var(Y_{x/d})
       = (x/d) Var(Y_1)     since Var(Y_i) = Var(Y_1)
       = xσ^2,  where σ^2 = Var(Y_1)/d

Thus, it would seem that the estimators A and B should be chosen so as to minimize

Σ_i (Y_i − A − Bx_i)^2 / x_i
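As a quick numerical illustration of this variance argument (an addition, not from the text), suppose for concreteness that each car independently gives rise to a Poisson-distributed number of accidents, corresponding to groups of size d = 1. Then Var(Y) should grow linearly in the traffic level x; the rate lam below is hypothetical:

    import numpy as np

    rng = np.random.default_rng(0)
    lam = 0.02      # hypothetical accident rate per car
    reps = 10_000   # replications per traffic level

    for x in (100, 200, 400, 800):
        # Y = Y_1 + ... + Y_x: total accidents from x independent cars
        y = rng.poisson(lam, size=(reps, x)).sum(axis=1)
        print(x, round(y.var(), 2))  # sample variance is roughly 0.02*x

Doubling x roughly doubles the sample variance, which is exactly the Var(Y) = xσ^2 behavior that motivates the weights 1/x_i.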