of the original measurements. (Recall that we used SS_total from the analysis of variance when we calculated η².) An alternative would be the square root of MS_error from the analysis of covariance, which would standardize the mean difference in the metric of the adjusted scores, a metric that is somewhat more difficult to interpret. Cortina and Nouri (2000) have made the sensible suggestion that when the covariate normally varies in the population, as ours does, we want to include that variability in our estimate of error. This means that we would use the square root of MS_error from the analysis of variance on the posttest scores. In that analysis MS_error is 1.392 (which is simply the weighted mean of the group variances). Then

d = \frac{\hat{\psi}}{\hat{s}} = \frac{\bar{X}_3 - \bar{X}_1}{\sqrt{MS_{error}}} = \frac{3.1719 - 1.7153}{\sqrt{1.392}} = \frac{1.4566}{1.1798} = 1.23

Injection of the optimal dose of THC (0.5 mg) leads to an increase in postinjection activity of nearly one and a quarter standard deviations relative to the control group.
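For readers who like to check such arithmetic in software, the short Python sketch below simply reproduces the calculation above. The two means and MS_error are the values already reported for these data; the variable names are ours and carry no special meaning.

```python
import math

# Posttest means used above for the 0.5 mg THC group and the control group.
mean_thc_05 = 3.1719
mean_control = 1.7153

# MS_error from the analysis of variance on the posttest scores,
# i.e., the weighted mean of the group variances.
ms_error = 1.392

# Standardize the mean difference by the square root of MS_error
# so that d is expressed in standard-deviation units.
d = (mean_thc_05 - mean_control) / math.sqrt(ms_error)
print(round(d, 2))  # 1.23
```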

16.7 Interpreting an Analysis of Covariance


Interpreting an analysis of covariance can present certain problems, depending on the nature of the data and, more important, the design of the experiment. A thorough and readable discussion of most of these problems is presented by Huitema (1980). Other important sources for consideration of these problems are Anderson (1963), Evans and Anastasio (1968), Huitema (2005), Lord (1967, 1969), Maxwell and Cramer (1975), Reichardt (1979), Smith (1957), and Weisberg (1979).
The ideal application for an analysis of covariance is an experiment in which participants are randomly assigned to treatments (or cells of a factorial design). In that situation, the expected value of the covariate mean for each group or cell is the same, and any differences can be attributed only to chance, assuming that the covariate was measured before the treatments were applied. In this situation, the analysis of covariance will primarily reduce the error term, but it will also, properly, remove any bias in the dependent variable means caused by chance group differences on the covariate. This was the situation in the Conti and Musty (1984) study that we have been discussing.
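The logic of this paragraph can be illustrated with a small simulation. The numbers below are invented and are not the Conti and Musty data; the sketch assumes the pandas and statsmodels libraries are available. With random assignment and a covariate measured before treatment, the group covariate means differ only by chance, and the principal payoff of the covariate is a smaller error term.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Hypothetical randomized experiment: participants are assigned to two
# groups at random, the covariate (pretest) is measured before treatment,
# and the treatment simply adds a constant to the posttest score.
n = 100
pre = rng.normal(50, 10, size=n)                      # covariate, unaffected by treatment
group = rng.permutation(np.repeat([0, 1], n // 2))    # random assignment
post = 0.6 * pre + 5 * group + rng.normal(0, 5, size=n)

df = pd.DataFrame({"pre": pre, "group": group, "post": post})

# Because assignment is random, the group covariate means differ only by chance.
print(df.groupby("group")["pre"].mean())

# ANOVA on the posttest alone versus ANCOVA with the pretest as covariate.
anova = smf.ols("post ~ C(group)", data=df).fit()
ancova = smf.ols("post ~ C(group) + pre", data=df).fit()

# The covariate soaks up pre-existing individual differences, so the
# residual mean square (the error term) is much smaller in the ANCOVA.
print("ANOVA  MS_error:", round(anova.mse_resid, 2))
print("ANCOVA MS_error:", round(ancova.mse_resid, 2))
```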
In a randomized experiment in which the covariate is measured after the treatment has been applied and has affected the covariate, interpreting the results of an analysis of covariance is difficult at best. In this situation the expected values of the group covariate means are not equal, even though the subjects were assigned randomly. It is difficult to interpret the results of the analysis because you are asking what the groups would have been like had they not differed on the covariate, when in fact the covariate differences may be an integral part of the treatment effect. This problem is particularly severe if the covariate was measured with error (i.e., if it is not perfectly reliable). In this case an alternative analysis, called the true-score analysis of covariance, may be appropriate if the other interpretive problems can be overcome. Such an analysis is discussed in Huitema (1980, Chapter 14).
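A brief simulated example, again with invented numbers and assuming pandas and statsmodels, shows why adjusting for a covariate that has itself been affected by the treatment is so hard to interpret: the adjustment strips out the portion of the treatment effect that operates through the covariate.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)

# Hypothetical randomized experiment in which the "covariate" is measured
# after treatment and is itself raised by the treatment.
n = 200
group = rng.permutation(np.repeat([0, 1], n // 2))
covariate = 50 + 4 * group + rng.normal(0, 5, size=n)   # affected by treatment
post = 0.5 * covariate + 3 * group + rng.normal(0, 5, size=n)

df = pd.DataFrame({"group": group, "covariate": covariate, "post": post})

# The total treatment effect on the posttest is 3 + 0.5 * 4 = 5 units, but
# covarying out the post-treatment covariate removes the part of the effect
# that operates through it, leaving roughly the direct effect of 3.
unadjusted = smf.ols("post ~ C(group)", data=df).fit()
adjusted = smf.ols("post ~ C(group) + covariate", data=df).fit()
print("Unadjusted group effect:", round(unadjusted.params["C(group)[T.1]"], 2))
print("Adjusted group effect:  ", round(adjusted.params["C(group)[T.1]"], 2))
```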
When subjects are not assigned to the treatment groups at random, interpreting the analysis of covariance can be even more troublesome. The most common example of this problem is what is called the nonequivalent groups design. In this design, two (or more) intact groups are chosen (e.g., schools or classrooms of children), a pretest measure is obtained from subjects in both groups, the treatment is applied to one of the groups, and the two groups are then compared on some posttest measure. Since participants are not assigned to the groups at random, we have no basis for assuming that any differences that exist on the pretest are to be attributed to chance. Similarly, we have no basis for expecting
