Statistical Methods for Psychology

when you go to other sources to look up nested (or random) designs you will often find the
two discussed together. A final point to keep in mind is that in all of the between-subjects
designs in this book, subjects are nested within other factors and are considered to be a
random factor. All of our F statistics are computed taking that into account.

13.9 Measures of Association and Effect Size


We can look at the magnitude of an effect in two different ways, just as we did with the
one-way analysis. We can either calculate an r-family measure, such as $\eta^2$, or we can
calculate a d-family measure, such as d. Normally when we are examining an omnibus F, we
use an r-family measure. However, when we are looking at a contrast between means it is
usually more meaningful to calculate an effect size estimate (d). We have seen both types
of measures in previous chapters.

r-Family Measures


As with the one-way design, it is possible to calculate the magnitude of effect associated
with each independent variable. The easiest, but also the most biased, way to do this is to
calculate $\eta^2$. Here we would simply take the relevant sum of squares and divide by
$SS_{total}$. Thus, the magnitude of effect for variable A is $\eta^2_\alpha = SS_A / SS_{total}$
and for variable B is $\eta^2_\beta = SS_B / SS_{total}$, whereas the magnitude of effect for
the interaction is $\eta^2_{\alpha\beta} = SS_{AB} / SS_{total}$.
There are two difficulties with the measure that we have just computed. In the first
place, $\eta^2$ is a biased estimate of the true magnitude of effect in the population. To put this
somewhat differently, $\eta^2$ is a very good descriptive statistic, but a poor inferential statistic.
Second, $\eta^2$, as we calculated it here, may not measure what we want to measure. We will
speak about that shortly when we discuss partial $\eta^2$.
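A short numerical sketch may make the arithmetic concrete. The sums of squares below are invented for illustration; in practice they would come from the ANOVA summary table of a two-way design.

```python
# Hypothetical sums of squares from a two-way factorial ANOVA table.
# These values are invented for illustration only.
ss = {"A": 240.25, "B": 1514.94, "AB": 190.30, "error": 1587.20}
ss_total = sum(ss.values())

# eta-squared for each effect: SS_effect / SS_total
eta_sq = {effect: ss[effect] / ss_total for effect in ("A", "B", "AB")}

for effect, value in eta_sq.items():
    print(f"eta^2 for {effect}: {value:.3f}")
```

Note that the three $\eta^2$ values plus $SS_{error}/SS_{total}$ sum to 1, which is why $\eta^2$ is read as the proportion of total variability attributable to each effect.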
Although $\omega^2$ is also biased, the bias is much less than for $\eta^2$. In addition, the statistical
theory underlying $\omega^2$ allows us to differentiate between fixed, random, and mixed models
and to act accordingly.
To develop $\omega^2$ for two-way and higher-order designs, we begin with the set of expected
mean squares given in Table 13.8, derive estimates of $\sigma^2_\alpha$, $\sigma^2_\beta$, $\sigma^2_{\alpha\beta}$, and $\sigma^2_e$, and then form
ratios of each of these components relative to the total variance. Rather than derive the
formulae for calculating $\omega^2$ for the three different models, as I have done in previous editions
of this book, I will present the results in a simple table. I strongly suspect that no student
remembered the derivation five minutes after he or she read it, and that many students were
so numb by the end of the derivation that they missed the final result.
For a factorial analysis of variance the basic formula to estimate $\omega^2$ remains the same
whether we are looking at fixed or random variables. The only difference is in how we
calculate the components of that formula. We will start by letting $\hat{\sigma}^2_{\text{effect}}$ refer to the estimate
of the variance of the independent variable we care about at the moment, such as A, B, or
AB, and by letting $\hat{\sigma}^2_{\text{total}}$ refer to the sum of all sources of variance. (If an effect is fixed,
replace $\hat{\sigma}^2$ by $\hat{\theta}^2$.) Then if we know the value of these terms we can estimate $\omega^2_{\text{effect}}$ as

$$\hat{\omega}^2_{\text{effect}} = \frac{\hat{\sigma}^2_{\text{effect}}}{\hat{\sigma}^2_{\text{total}}}$$

For the main effect of A, for example, this becomes

$$\hat{\omega}^2_\alpha = \frac{\hat{\sigma}^2_\alpha}{\hat{\sigma}^2_{\text{total}}} = \frac{\hat{\sigma}^2_\alpha}{\hat{\sigma}^2_\alpha + \hat{\sigma}^2_\beta + \hat{\sigma}^2_{\alpha\beta} + \hat{\sigma}^2_e}$$

All we have to know is how to calculate the variance components ($\hat{\sigma}^2_{\text{effect}}$).
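As a concrete sketch of the formula above for the fixed-effects case, the code below assumes the standard fixed-effects variance-component estimates $\hat{\theta}^2_{\text{effect}} = df_{\text{effect}}(MS_{\text{effect}} - MS_{\text{error}})/(nab)$ and $\hat{\sigma}^2_e = MS_{\text{error}}$; the design sizes and mean squares are invented for illustration.

```python
# Sketch: omega-squared for a two-way fixed-effects design.
# Assumes the standard fixed-effects variance-component estimates;
# all numerical inputs below are invented for illustration.
a, b, n = 3, 4, 5            # levels of A, levels of B, observations per cell
N = a * b * n                # total number of observations (n*a*b)

ms = {"A": 120.0, "B": 80.0, "AB": 30.0, "error": 10.0}   # hypothetical mean squares
df = {"A": a - 1, "B": b - 1, "AB": (a - 1) * (b - 1)}

# theta-hat^2 for each fixed effect: df_effect * (MS_effect - MS_error) / (n*a*b)
theta_sq = {e: df[e] * (ms[e] - ms["error"]) / N for e in df}
sigma_sq_e = ms["error"]
total = sum(theta_sq.values()) + sigma_sq_e   # sigma-hat^2 total

# omega-hat^2 = theta-hat^2_effect / sigma-hat^2_total
omega_sq = {e: theta_sq[e] / total for e in df}
for e, v in omega_sq.items():
    print(f"omega^2 for {e}: {v:.3f}")
```

For a random or mixed model the same ratio is formed; only the way the variance components are estimated from the expected mean squares changes.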

438 Chapter 13 Factorial Analysis of Variance
