small, the fit might not be an overwhelming one. Before introducing this
measure formally, we present some initial considerations.
Consider the variance of the observations y by analyzing the total sum
of squares of y around its mean, as given by


$$\mathrm{SST} = \sum_{i=1}^{n} \left( y_i - \bar{y} \right)^2 \qquad (2.8)$$
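(Note that, with the variance convention used in the $R^2$ formula below, $s_y^2 = \frac{1}{n}\sum_{i=1}^{n}(y_i - \bar{y})^2$, the total sum of squares is simply $n$ times the sample variance of $y$: $\mathrm{SST} = n\,s_y^2$.)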

The total sum of squares (denoted by SST) can be decomposed into the sum
of squares explained by the regression (denoted by SSR) and the sum of
squared errors (denoted by SSE). That is,^4


$$\mathrm{SST} = \mathrm{SSR} + \mathrm{SSE} \qquad (2.9)$$
with
$$\mathrm{SSR} = \sum_{i=1}^{n} \left( \hat{y}_i - \bar{y} \right)^2 \qquad (2.10)$$

and


$$\mathrm{SSE} = \sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2 = \sum_{i=1}^{n} e_i^2 = \sum_{i=1}^{n} y_i^2 - a \sum_{i=1}^{n} y_i - b \sum_{i=1}^{n} x_i y_i$$
where $\hat{y}_i = a + b x_i$ is the estimated value for $y_i$ from the regression.
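To see why the decomposition (2.9) holds, write $y_i - \bar{y} = (\hat{y}_i - \bar{y}) + (y_i - \hat{y}_i)$ and expand; a short sketch, using the least-squares normal equations $\sum_{i=1}^{n} e_i = 0$ and $\sum_{i=1}^{n} x_i e_i = 0$:

$$\mathrm{SST} = \sum_{i=1}^{n} \left[ (\hat{y}_i - \bar{y}) + e_i \right]^2 = \mathrm{SSR} + \mathrm{SSE} + 2 \sum_{i=1}^{n} (\hat{y}_i - \bar{y})\, e_i ,$$

where the cross term vanishes because $\sum_{i=1}^{n} (\hat{y}_i - \bar{y})\, e_i = (a - \bar{y}) \sum_{i=1}^{n} e_i + b \sum_{i=1}^{n} x_i e_i = 0$.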
The SSR is that part of the total sum of squares that is explained by the
regression term f(x). The SSE is the part of the total sum of squares that is
unexplained, or equivalently, the sum of squares of the errors. Now, the
coefficient of determination is defined by^5


$$R^2 = \frac{\operatorname{var}(f(x))}{s_y^2} = \frac{\frac{1}{n} \sum_{i=1}^{n} \left( a + b x_i - \bar{y} \right)^2}{s_y^2} = \frac{\frac{1}{n} \sum_{i=1}^{n} \left( \hat{y}_i - \bar{y} \right)^2}{\frac{1}{n} \sum_{i=1}^{n} \left( y_i - \bar{y} \right)^2} = \frac{\mathrm{SSR}}{\mathrm{SST}} = \frac{\mathrm{SST} - \mathrm{SSE}}{\mathrm{SST}} = 1 - \frac{\mathrm{SSE}}{\mathrm{SST}}$$
(^4) The notation used in explaining $R^2$ differs across books. In some books, SSR denotes the sum
of squares of the residuals (where R represents the residuals, i.e., the errors) and SSE denotes the sum
of squares explained by the regression (where E stands for explained). Notice that
this notation is just the opposite of the one we use.
(^5) Note that the means of $y$ and $\hat{y}$ are the same (i.e., they are both equal to $\bar{y}$).
