Statistical Methods for Psychology

IN OUR DISCUSSION OF THE ANALYSIS OF VARIANCE, we have concerned ourselves with experimental designs that have different subjects in the different cells. More precisely, we have been concerned with designs in which the cells are independent, or uncorrelated. (Under the assumptions of the analysis of variance, independent and uncorrelated are synonymous
in this context.) In this chapter we are going to be concerned with the problem of analyzing
data where some or all of the cells are not independent. Such designs are somewhat more
complicated to analyze, and the formulae become more complex. Most, or perhaps even
all, readers will approach the problem using computer software such as SPSS or SAS.
However, to understand what you are seeing, you need to know something about how you
would approach the problem by hand; and that leads to lots and lots of formulae. I urge you
to treat the formulae lightly, and not feel that you have to memorize any of them. This
chapter needs to be complete, and that means we have to go into the analysis at some depth,
but remember that you can always come back to the formulae when you need them, and
don’t worry about the calculations too much until you do need them.
If you think of a typical one-way analysis of variance with different subjects serving under the different treatments, you would probably be willing to concede that the correlations between treatments 1 and 2, 1 and 3, and 2 and 3 have an expectation of zero.

Treatment 1   Treatment 2   Treatment 3
   X11           X21           X31
   X12           X22           X32
   ...           ...           ...
   X1n           X2n           X3n

However, suppose that in the design diagrammed here the same subjects were used in all three treatments. Thus, instead of 3n subjects measured once, we have n subjects measured three times. In this case, we would be hard put to believe that the intercorrelations of
the three treatments would have expectancies of zero. On the contrary, the better subjects under treatment 1 would probably also perform well under treatments 2 and 3, and the poorer subjects under treatment 1 would probably perform poorly under the other conditions, leading to significant correlations among treatments.
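A quick simulation makes this concrete. The sketch below is not from the text; the sample size, the subject "ability" spread, and the treatment effects are all invented for illustration. When each subject contributes a stable ability component to every one of that subject's scores, the treatment columns come out positively correlated, just as described above.

```python
import random
import statistics

random.seed(1)

n = 200  # hypothetical number of subjects
# Each subject has a stable "ability" shifting all of that subject's
# scores; each treatment adds a small fixed effect plus noise.
ability = [random.gauss(0, 3) for _ in range(n)]
treatment_effect = [0.0, 0.5, 1.0]  # invented effects

scores = [[a + t + random.gauss(0, 1) for t in treatment_effect]
          for a in ability]

def pearson_r(x, y):
    """Plain Pearson correlation, computed from its definition."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def col(j):
    return [row[j] for row in scores]

r12 = pearson_r(col(0), col(1))
r13 = pearson_r(col(0), col(2))
print(r12, r13)  # both strongly positive: subjects are shared
```

With different subjects in each treatment, the same calculation would hover around zero; it is the shared subject pool that drives the correlations up.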
This lack of independence among the treatments would cause a serious problem if it
were not for the fact that we can separate out, or partition, and remove the dependence imposed by repeated measurements on the same subjects. (To use a term that will become much more familiar in Chapter 15, we can say that we are partialling out effects that cause the dependence.) In fact, one of the main advantages of repeated-measures designs is that they allow us to reduce overall variability by using a common subject pool for all treatments, and at the same time allow us to remove subject differences from our error term, leaving the error components independent from treatment to treatment or cell to cell.
As an illustration, consider the highly exaggerated set of data on four subjects over
three treatments presented in Table 14.1. Here the dependent variable is the number of trials to criterion on some task. If you look first at the treatment means, you will see some
slight differences, but nothing to get too excited about. There is so much variability within
each treatment that it would at first appear that the means differ only by chance. But look
at the subject means. It is apparent that subject 1 learns quickly under all conditions, and
that subjects 3 and 4 learn remarkably slowly. These differences among the subjects are
producing most of the differences within treatments, and yet they have nothing to do with
the treatment effect. If we could remove these subject differences we would have a better
(and smaller) estimate of error. At the same time, it is the subject differences that are creating the high positive intercorrelations among the treatments, and these too we will partial out by forming a separate term for subjects.
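The partition itself is easy to sketch. The data below are invented for illustration (they are not the data of Table 14.1, which is not reproduced in this excerpt), but they follow the same pattern: one fast learner, one middling, two slow. The total sum of squares splits into a subjects term, a treatments term, and a residual error term, and the subjects term absorbs the bulk of the variability.

```python
import statistics

# Hypothetical scores: 4 subjects (rows) x 3 treatments (columns),
# invented to mimic the pattern described for Table 14.1.
data = [
    [2, 4, 3],     # fast learner
    [8, 10, 9],
    [14, 16, 17],  # slow learners
    [15, 18, 16],
]
n, k = len(data), len(data[0])
grand = statistics.mean(x for row in data for x in row)

subj_means = [statistics.mean(row) for row in data]
treat_means = [statistics.mean(row[j] for row in data) for j in range(k)]

# Partition: SS_total = SS_subjects + SS_treatments + SS_error
ss_total = sum((x - grand) ** 2 for row in data for x in row)
ss_subjects = k * sum((m - grand) ** 2 for m in subj_means)
ss_treat = n * sum((m - grand) ** 2 for m in treat_means)
ss_error = ss_total - ss_subjects - ss_treat

print(ss_total, ss_subjects, ss_treat, ss_error)
```

Once the subjects term is pulled out, the error term left for testing the treatment effect is a small fraction of the raw within-treatment variability; that is exactly the advantage claimed in the paragraph above.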


462 Chapter 14 Repeated-Measures Designs

