Statistical Analysis for Education and Psychology Researchers

to describe a trend is to use polynomials (non-linear relationships) of varying degree in a
regression analysis. The treatment levels would correspond to the X variable and the
treatment group means would be the Y variable in a regression of Y on X. A further
design implication is that the various treatment levels should correspond to a sensible
measurement scale, i.e., equal steps along an ordered scale; in this example we would
choose equal time intervals.
A linear (first-degree) model would be fitted first to see whether there is any linear
relationship. Polynomial terms would then be fitted, e.g., quadratic (second-degree) and
cubic (third-degree) terms. F-tests can then be performed to test the significance of the
linear and higher-order trends. The reader is referred to Chapter 3 of Winer (1962) for a
worked example.
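
As an illustration (not taken from the original text), the following sketch shows one way such a trend analysis might be carried out in Python with the statsmodels library, using entirely hypothetical data; the anova_lm function compares the nested linear, quadratic and cubic models with F-tests.

# A minimal sketch of polynomial trend analysis; the data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Four equally spaced treatment levels (e.g., time intervals), five subjects per level.
data = pd.DataFrame({
    "x": np.repeat([1, 2, 3, 4], 5),
    "y": [10, 12, 11, 13, 12,  15, 14, 16, 15, 17,
          22, 21, 23, 22, 24,  25, 27, 26, 28, 27],
})

# Fit first-, second- and third-degree (linear, quadratic, cubic) models.
linear    = smf.ols("y ~ x", data=data).fit()
quadratic = smf.ols("y ~ x + I(x**2)", data=data).fit()
cubic     = smf.ols("y ~ x + I(x**2) + I(x**3)", data=data).fit()

# Nested-model F-tests: does each higher-order term improve the fit
# significantly over the lower-order model?
print(anova_lm(linear, quadratic, cubic))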


8.9 Two-way ANOVA: 2×2 Factorial (unrelated), two factors with two levels for each factor

When to Use

When a survey or experimental design has two independent variables (factors) and every
level of one factor is paired with every level of the other factor, this is called a factorial
design and can be analysed by a 2×2 Factorial ANOVA. In this design different subjects
appear in each combination of the levels of the factors, hence the term unrelated. In
a 2×2 design there are two factors, each with two levels, giving four cells. Results can
therefore be analysed by examining the main effect of each factor (ignoring the effect of the
other factor) and by looking for an interaction effect. Often a factorial analysis is
performed because an investigator believes there will be a significant interaction between
the two independent variables. In an unrelated 2×2 factorial design there is only one score
per subject (experimental unit), and if the design is balanced there will be an equal
number of subjects (and scores) in each of the four cells of the design.
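
By way of illustration (the data and variable names below are hypothetical, not taken from the text), a balanced unrelated 2×2 factorial ANOVA of this kind might be obtained in Python with statsmodels as follows; the ANOVA table contains a line for each main effect, the interaction and the error term.

# A minimal sketch of an unrelated 2x2 factorial ANOVA; data are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Two factors with two levels each, different subjects in every cell,
# one score per subject, three subjects per cell (balanced design).
data = pd.DataFrame({
    "method": ["m1"] * 6 + ["m2"] * 6,
    "group":  (["g1"] * 3 + ["g2"] * 3) * 2,
    "score":  [12, 14, 13,  18, 17, 19,  11, 13, 12,  25, 24, 26],
})

model = smf.ols("score ~ C(method) * C(group)", data=data).fit()
print(anova_lm(model, typ=2))   # SS for each main effect, the interaction and error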
In this analysis the total variance between subjects is partitioned into three separate
components: a sum of squares for each factor, SSF1 and SSF2, and an interaction sum of
squares, SSF1×F2. Any differences between subjects within each combination of treatment
conditions are counted as a source of error variance. The interaction sum of squares can
be calculated with a pocket calculator by computing the sum of squares for all four
cells in the design (based on the totals of all four cells) and then subtracting the sum of
squares for each single factor (each based on the totals of only two cells). A computer analysis
evaluates the interaction term by first fitting a full statistical model and then comparing
the estimated values with those from a reduced model with the interaction term deleted. One final
problem that researchers occasionally encounter with factorial designs is the choice of an
appropriate error term for the denominator of the F-test when a random-effects or mixed-
effects (fixed and random factors) model is fitted. The choice of error terms, and how to
specify them in your analysis, is described in the section on computer analysis.
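
The following sketch (again with hypothetical data, and using cell, row and column means rather than totals) illustrates both approaches: the interaction sum of squares is first computed by hand, and the same value is then obtained by comparing the full model with a reduced model from which the interaction term has been deleted.

# A minimal sketch, assuming a balanced 2x2 design with 3 subjects per cell.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

data = pd.DataFrame({
    "f1": ["a1"] * 6 + ["a2"] * 6,
    "f2": (["b1"] * 3 + ["b2"] * 3) * 2,
    "score": [5, 6, 7,  9, 10, 11,  6, 7, 8,  14, 15, 16],
})

grand = data["score"].mean()

# Hand calculation: between-cells SS minus the two main-effect SS.
cell_means = data.groupby(["f1", "f2"])["score"].mean()
ss_cells = (3 * (cell_means - grand) ** 2).sum()   # 3 subjects per cell
f1_means = data.groupby("f1")["score"].mean()
f2_means = data.groupby("f2")["score"].mean()
ss_f1 = (6 * (f1_means - grand) ** 2).sum()        # 6 subjects per factor level
ss_f2 = (6 * (f2_means - grand) ** 2).sum()
ss_interaction = ss_cells - ss_f1 - ss_f2
print("interaction SS (by hand):", ss_interaction)

# Computer analysis: full model vs reduced model without the interaction term.
full    = smf.ols("score ~ C(f1) * C(f2)", data=data).fit()
reduced = smf.ols("score ~ C(f1) + C(f2)", data=data).fit()
print(anova_lm(reduced, full))   # F-test for the interaction; ss_diff matches the hand value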

