conditional distribution of Y because it is the distribution of Y scores for those cases that
meet a certain condition with respect to X. We say that these standard deviations are conditional
on X because we calculate them from Y values corresponding to specific values of X.
On the other hand, our usual standard deviation of Y ($s_Y$) is not conditional on X because we
calculate it using all values of Y, regardless of their corresponding X values.
One way to obtain the standard error of estimate would be to calculate $\hat{Y}$ for each
observation and then to find $s_{Y \cdot X}$ directly, as has been done in Table 9.3. Finding the standard
error using this technique is hardly the most enjoyable way to spend a winter evening. Fortunately,
a much simpler procedure exists. It not only provides a way of obtaining the standard
error of estimate, but also leads directly into even more important matters.
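
A minimal Python sketch of that direct route may help make the steps concrete. It uses only the ten cases displayed in Table 9.3 (the published values come from the full sample, so the fitted line and the resulting standard error here will not reproduce the book's 0.173); the point is the sequence of steps: fit the line, compute $\hat{Y}$ for each case, square and sum the residuals, divide by $N - 2$, and take the square root.

```python
import numpy as np

# Only the ten cases shown in Table 9.3; the book's results use the full sample.
stress = np.array([30, 27, 9, 20, 3, 15, 5, 10, 23, 34], dtype=float)
ln_symptoms = np.array([4.60, 4.54, 4.38, 4.25, 4.61,
                        4.69, 4.13, 4.39, 4.30, 4.80])
n = len(ln_symptoms)

# Least-squares slope and intercept for predicting Y from X
b = np.cov(stress, ln_symptoms, ddof=1)[0, 1] / np.var(stress, ddof=1)
a = ln_symptoms.mean() - b * stress.mean()

y_hat = a + b * stress              # predicted value for each observation
residuals = ln_symptoms - y_hat     # these sum to (essentially) zero

# Standard error of estimate: sqrt(SS_residual / (N - 2))
s_y_dot_x = np.sqrt(np.sum(residuals ** 2) / (n - 2))
print(round(s_y_dot_x, 3))
```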

r^2 and the Standard Error of Estimate


In much of what follows, we will abandon the term variance in favor of sums of squares
(SS). As you should recall, a variance is a sum of squared deviations from the mean (gener-
ally known as a sum of squares) divided by the degrees of freedom. The problem with vari-
ances is that they are not additive unless they are based on the same df. Sums of squares are
additive regardless of the degrees of freedom and thus are much easier measures to use.^10
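
To see that additivity claim in action, here is a short simulated example (not from the text; the data are made up): for a least-squares regression line, the sum of squares of Y splits exactly into a part associated with the predicted values and a residual part, while the corresponding variances, which sit on different degrees of freedom, do not add up the same way.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=30)
y = 2.0 + 0.4 * x + rng.normal(size=30)
n = len(y)

# Least-squares fit
b = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
a = y.mean() - b * x.mean()
y_hat = a + b * x

ss_total = np.sum((y - y.mean()) ** 2)          # SS for Y as a whole
ss_predicted = np.sum((y_hat - y.mean()) ** 2)  # SS attributable to the line
ss_residual = np.sum((y - y_hat) ** 2)          # SS left over

# Sums of squares are additive ...
print(np.isclose(ss_total, ss_predicted + ss_residual))   # True

# ... but variances built on different df are not
print(np.isclose(ss_total / (n - 1),
                 ss_predicted / 1 + ss_residual / (n - 2)))  # False
```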
We earlier defined the residual or error variance as

$$s_{Y \cdot X}^2 = \frac{\sum (Y - \hat{Y})^2}{N - 2} = \frac{SS_{\text{residual}}}{N - 2}$$

With considerable algebraic manipulation, it is possible to show

$$s_{Y \cdot X} = s_Y \sqrt{\frac{(1 - r^2)(N - 1)}{N - 2}}$$
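
The "considerable algebraic manipulation" reduces to a few lines once one fact is granted, a fact developed shortly in connection with $r^2$: for the least-squares line, the residual sum of squares is the proportion $(1 - r^2)$ of the total sum of squares of $Y$. Under that assumption, a sketch of the algebra is:

```latex
% Sketch, assuming SS_residual = (1 - r^2) * SS_Y for the least-squares line
\begin{aligned}
s_{Y \cdot X}^2
  &= \frac{SS_{\text{residual}}}{N - 2}
   = \frac{(1 - r^2)\,\sum (Y - \overline{Y})^2}{N - 2}
   = \frac{(1 - r^2)(N - 1)\, s_Y^2}{N - 2},\\[4pt]
s_{Y \cdot X}
  &= s_Y \sqrt{\frac{(1 - r^2)(N - 1)}{N - 2}}.
\end{aligned}
```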



Table 9.3   Direct calculation of the standard error of estimate

Subject    Stress (X)    lnSymptoms (Y)      Ŷ        Y − Ŷ
   1          30             4.60          4.557      0.038
   2          27             4.54          4.532      0.012
   3           9             4.38          4.378      0.004
   4          20             4.25          4.472     −0.223
   5           3             4.61          4.326      0.279
   6          15             4.69          4.429      0.262
   7           5             4.13          4.343     −0.216
   8          10             4.39          4.386      0.008
   9          23             4.30          4.498     −0.193
  10          34             4.80          4.592      0.204
   …           …               …             …          …

$$\sum (Y - \hat{Y}) = 0 \qquad \sum (Y - \hat{Y})^2 = 3.128$$

$$s_{Y \cdot X}^2 = \frac{\sum (Y - \hat{Y})^2}{N - 2} = \frac{3.128}{105} = 0.030 \qquad s_{Y \cdot X} = \sqrt{0.030} = 0.173$$

^10 Later in the book, when I wish to speak about a variance-type measure but do not want to specify whether it is a
variance, a sum of squares, or something similar, I will use the vague, wishy-washy term variation.
