We divide by $(k - 1)$ instead of $k$ to get the average because, with the grand mean fixed, the sum of the deviations from the grand mean must be 0. We have only $(k - 1)$ df from groups, and that is our divisor.
The RMSSE is a logical measure of effect size because it is a direct measure of the differences among the group means, standardized by dividing by the standard deviation. The only change that we will make in this measure is to replace parameters (such as $\mu$ and $\sigma$) by the corresponding statistics:

$$d = \sqrt{\frac{1}{k-1}\sum\left(\frac{\mu_j - \mu}{\sigma}\right)^2} = \sqrt{\left(\frac{1}{k-1}\right)\frac{\sum(\mu_j - \mu)^2}{\sigma^2}} = \sqrt{\frac{1}{k-1}\left(\frac{\sum(\bar{X}_j - \bar{X}_{..})^2}{s^2}\right)}$$

where $s^2$ is estimated by $MS_{\text{error}}$.
For the Eysenck study on recall as a function of depth of processing the means were
Counting   Rhyming   Adjective   Imagery   Intention   Grand Mean
  7.0        6.9       11.0        13.4       12.0        10.06
The sum of the squared deviations of the group means from the grand mean is 29.5648, and the square root of $MS_{\text{error}}$ is 3.1101. Thus

$$d = \sqrt{\frac{29.5648}{4(9.6730)}} = \sqrt{0.7641} = 0.874$$
The group means thus differ, on average, by nearly 9/10 of a standard deviation from the grand mean, which is a considerable difference.
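Since every quantity in this calculation has already been reported, the arithmetic is easy to check. Here is a minimal sketch in base R, simply taking the reported values as given:

# Check of the RMSSE arithmetic, using the values reported above
ss.means <- 29.5648                         # sum of squared deviations of the group means
ms.error <- 9.6730                          # MS_error from the overall ANOVA
k        <- 5                               # number of groups
d <- sqrt(ss.means / ((k - 1) * ms.error))  # RMSSE
round(d, 3)                                 # 0.874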
We can go one step further, which will give us more information about the kind of effect we have found. Steiger (2004) discusses setting confidence limits on this measure. Steiger provides free software (see his paper), and Kelley (2008) provides similar software, written as functions for R, that do even more. Using such software we can show that the 95% confidence limits on the RMSSE for the Eysenck experiment are $.526 \leq \delta \leq 1.283$. (I used $\delta$ in this expression because we are estimating a parameter.) The important thing about this result is that the lower limit of our 95% confidence interval is still greater than .50, meaning that we have a lower limit of over one half of a standard deviation as the average group difference. This suggests that we are talking about a substantial difference among groups.
We will have more to say about such measures in the next chapter, but in that case we will focus on pairs of means rather than the complete set of means. In this chapter I have focused on measures that relate to differences among many groups simultaneously. I think that you will find in your research that it is specific group differences that are most important, and in that situation d-family measures have a distinct advantage.
11.12 Power
Estimating power in the analysis of variance is a straightforward extension of the power analysis for $t$, although the notation is different, as are the tables. Just as with $t$, we will define a statistic, phi prime ($\phi'$), based on the expected differences among the $\mu_j$, then derive a second statistic, phi ($\phi$), which is a function of $n$ and $\phi'$, and finally calculate power from tables of the noncentral $F$ distribution. A more complete treatment of power can be found in Cohen (1988) and Koele (1982).
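In practice, modern software computes power directly from the noncentral $F$ distribution rather than from printed tables. For the equal-$n$ one-way design the noncentrality parameter is $\lambda = n\sum\tau_j^2/\sigma_e^2$, and power is the probability that a noncentral $F$ exceeds the usual critical value. A minimal base-R sketch, borrowing the quantities from the Eysenck example and treating them as hypothetical population values:

# Sketch: power of a one-way ANOVA computed from the noncentral F distribution
k <- 5; n <- 10
df1 <- k - 1; df2 <- k * (n - 1)
lambda <- n * 29.5648 / 9.6730              # n * sum(tau_j^2) / sigma_e^2
F.crit <- qf(.95, df1, df2)                 # central F critical value at alpha = .05
1 - pf(F.crit, df1, df2, ncp = lambda)      # power = P(noncentral F > F.crit)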
We already know (see p. 323) that

$$\frac{E(MS_{\text{treat}})}{E(MS_{\text{error}})} = \frac{\sigma_e^2 + n\sum\tau_j^2/(k-1)}{\sigma_e^2}$$

If $H_0$ is true, $\sum\tau_j^2 = 0$, and the ratio $F = MS_{\text{treat}}/MS_{\text{error}}$ will be distributed as the usual (central) $F$ distribution. The mean of this distribution is $df_{\text{error}}/(df_{\text{error}} - 2)$, which is very nearly 1 for any reasonable number of degrees of freedom.