points out the fact that it can be treated as any other squared correlation coefficient, indi-
cating the proportion of the variation accounted for by the independent variable. For
Eysenck’s data, 44.7% of the variation in recall scores can be attributed to differences in
the instructions given to the groups, and therefore, presumably, to the depth to which the
items were processed. This is an unusually large amount of explained variation, reflecting
the extreme nature of group differences.
There is another way to look at η² that derives directly from the last formula and
which has recently been viewed as a desirable feature of any measure of the magni-
tude of effect. In the terminology popularized by Judd and McClelland (1989), η² is
the Percent Reduction in Error (PRE). If we did not take group membership into
account, the error associated with our recall scores would be SS_total, the variability of
all 50 observations. But when we know which group a subject is in, the error associ-
ated with our predictions is reduced to SS_error, the average variation within groups.
The difference between SS_total and SS_error is SS_treat. Thus SS_treat divided by SS_total
is the percentage by which the error of our prediction has been reduced by con-
sidering group membership. In terms of our example, without attending to group
membership we had 786.82 units of error. After attending to group membership we
only have 435.30 units of error. Thus we have reduced our error by 786.82 − 435.30 =
351.52 points, or by 351.52/786.82 = 44.7%.
It is important to realize that η² assumes that the true regression line passes through
the individual treatment means. When the data are treated as a population, the assump-
tion is correct. When the data are treated as a sample from some larger population, how-
ever, bias is introduced. Since these means are really sample means, they are subject to
sampling error, and η² will be biased upward: whatever the true regression line
through the population means, it will probably not pass exactly through each sample
mean. Although all measures we discuss will be biased, η² is the most biased. Thus,
although it has the advantage of simplicity and is intuitively appealing, we will gener-
ally prefer to use a less biased estimate when our interest is in making general state-
ments about our variables. If we are interested in making statements only about our
particular set of data, or if we want a rough idea of the magnitude of the effect, then
η² is a perfectly good measure. Moreover, η² and other, less biased, measures converge as
sample sizes increase.
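The upward bias of η² is easy to demonstrate by simulation. The sketch below (my own illustration, assuming NumPy and five groups of ten scores, matching Eysenck's design) draws every score from identical normal populations, so the true effect is zero, yet the average sample η² comes out near (k − 1)/(N − 1) = 4/49 ≈ .08 rather than 0.

```python
# Simulation illustrating the upward bias of eta-squared: all population
# means are equal, yet sample eta-squared averages well above zero.
import numpy as np

rng = np.random.default_rng(1)
k, n = 5, 10                       # five groups of ten, as in Eysenck's design
etas = []
for _ in range(10_000):
    groups = rng.normal(0.0, 1.0, size=(k, n))   # identical populations
    grand = groups.mean()
    ss_total = ((groups - grand) ** 2).sum()
    ss_error = ((groups - groups.mean(axis=1, keepdims=True)) ** 2).sum()
    etas.append((ss_total - ss_error) / ss_total)

print(f"mean eta-squared under the null: {np.mean(etas):.3f}")  # about 0.082
```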
Omega-Squared (ω²)
An alternative, and for many purposes better, method of assessing the magnitude of the ex-
perimental effect with balanced (equal ns) or nearly balanced designs is omega-squared (ω²).
This statistic has been discussed by Hays (1994) and developed extensively by Fleiss
(1969), Vaughan and Corballis (1969), and Dodd and Schultz (1973). The derivation of ω²
is based on the underlying structural model that we discussed earlier, and there are two dif-
ferent formulae for ω², depending on whether the independent variable is fixed or random.
A random independent variable is rare in one-way designs, so we will ignore that version
here, though it will become meaningful in the more complex designs in Chapters 13 and 14.
I will also omit any derivation here, but the interested reader can find a derivation in the
earlier editions of this book (Howell, 1997) and in Winer (1971).
For the fixed-model analysis of variance, a relatively unbiased estimate of the magnitude
of experimental effect is given by
$$\omega^2 = \frac{SS_{\text{treat}} - (k - 1)\,MS_{\text{error}}}{SS_{\text{total}} + MS_{\text{error}}}$$
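Applied to Eysenck's data, and assuming his five groups (k = 5) of ten subjects each, so that df_error = N − k = 45, a quick Python sketch (my own illustration) gives the following. The result of roughly .39 is noticeably smaller than the η² of .447 computed earlier, reflecting ω²'s adjustment for bias.

```python
# Fixed-model omega-squared for Eysenck's data, assuming k = 5 groups
# and N = 50 observations, so df_error = 45.
ss_treat = 351.52
ss_total = 786.82
ss_error = 435.30
k, N = 5, 50

ms_error = ss_error / (N - k)                    # 435.30 / 45 ≈ 9.673
omega_sq = (ss_treat - (k - 1) * ms_error) / (ss_total + ms_error)
print(f"omega-squared = {omega_sq:.3f}")         # ≈ 0.393
```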