Statistical Analysis for Education and Psychology Researchers

In experimental designs the experimental units are usually individuals. Analysis of
variance (ANOVA) enables causal inferences to be made by partitioning the total
variation between individuals (experimental units) into separate components, each
representing a different source of variation or treatment effect. It is then possible to identify the
relative influence of the different treatments (independent variables) and to compare the
between-treatments variation (the observed differences) to differences attributable to
chance. Chance differences are any differences among individuals within the various
treatment groups due to uncontrolled or unknown variables. This source of variation is
conveniently termed error variance. The proportion of total variation attributable to
treatments, compared with the proportion attributable to error, forms the basis of the
F-ratio or F-statistic. If the observed treatment effects, that is, differences between
treatments as summarised by treatment means, account for about the same proportion of
variability in the response variable as the chance differences, then it is reasonable to
assume that the observed treatment differences are probably just random fluctuations and
conclude that there are no differences between treatments (the independent variables). If
the proportion of variance accounted for by the between-treatment conditions is large
compared with the error variance, then this indicates a significant treatment effect, which
may warrant further investigation to find out which treatment(s) have a significant
influence on the response variable.
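
To make this partition concrete, the following minimal Python sketch computes the between-treatments and error components, and the F-ratio formed from them. The group names and scores are invented for illustration and are not taken from the text.

```python
# A minimal sketch of the variance partition behind the F-ratio,
# using small made-up scores for three hypothetical treatment groups.

groups = {
    "treatment_A": [12, 14, 11, 13],
    "treatment_B": [18, 17, 19, 16],
    "treatment_C": [13, 15, 14, 12],
}

scores = [x for g in groups.values() for x in g]
grand_mean = sum(scores) / len(scores)

# Between-treatments sum of squares: variation of group means about the grand mean.
ss_between = sum(
    len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups.values()
)

# Within-treatments (error) sum of squares: variation of individuals
# about their own group mean.
ss_within = sum(
    (x - sum(g) / len(g)) ** 2 for g in groups.values() for x in g
)

k = len(groups)   # number of treatments
n = len(scores)   # total number of experimental units

ms_between = ss_between / (k - 1)  # between-treatments mean square
ms_within = ss_within / (n - k)    # error (within-treatments) mean square

f_ratio = ms_between / ms_within
print(f"F({k - 1}, {n - k}) = {f_ratio:.2f}")
```

A large F-ratio indicates that the variation among treatment means is large relative to the chance variation among individuals within treatments.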
Consider the vocabulary teaching methods experiment introduced in Chapter 1
(Example 2). The experimental units, in this case pupils, were randomly assigned to one
of three treatment groups. One condition was silent reading, another was storytelling and
the third was storytelling enhanced by pictures. After the intervention programme pupils’
vocabulary acquisition was assessed and the mean scores for each of the three treatment
groups were compared.
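
In practice such a comparison is usually made with a one-way ANOVA routine. As a sketch, assuming invented vocabulary scores for the three treatment groups (the raw data are not reported here), scipy.stats.f_oneway returns the F-statistic and its p-value:

```python
# A sketch of how the vocabulary experiment's group means might be compared
# with a one-way ANOVA. The scores below are invented for illustration.
from scipy.stats import f_oneway

silent_reading = [21, 24, 19, 23, 22]
storytelling = [26, 28, 25, 27, 24]
storytelling_pictures = [30, 29, 31, 28, 32]

result = f_oneway(silent_reading, storytelling, storytelling_pictures)
print(f"F = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```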
Analysis of variance methods can also be used with observational data, which includes
both comparative and survey designs. In comparative designs, the impact on a
metric response variable of categorical independent variables, representing naturally
occurring groups, is assessed. The researcher is looking for the effect of membership of a
particular group (independent variable) on the response variable. Variables used to define
groups are called factors, for example, sex, and the various treatments within a factor are
called levels, for example, male or female. The researcher looks to see whether there are
differences between the means of the treatment groups. The term 'treatment' is retained
because the ANOVA technique was originally developed for the analysis of true experimental designs.
The investigator is really examining differences among the means of the levels of a
factor, such as the difference between mean scores for males and females. Comparative
designs differ from true experimental designs because the levels of the independent
variables (group membership—male, female) are not randomly assigned (they have
already occurred or are natural categories) before their effects on the response variable
are observed. Survey designs are similar to comparative designs except that mean
differences for population subclasses are examined. The subclasses are described by the
researcher, for example, a particular age band of 5–7-year-olds or a particular type of
school. Subjects may even be subclassified by their responses after data has been
collected; for example, cognitive strategy users and non-strategy users might be
defined by the researchers only after initial data analysis.
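
Whether the groups arise from comparative or survey designs, the analysis proceeds in the same way. As a minimal sketch with invented scores, an ANOVA on a two-level factor such as sex reduces to the familiar two-sample t-test, with F equal to the square of t:

```python
# A sketch of ANOVA applied to a naturally occurring factor (sex, with
# levels male and female). Scores are invented for illustration. With
# only two levels, the F-test is equivalent to a pooled-variance
# two-sample t-test, so F = t**2.
from scipy.stats import f_oneway, ttest_ind

male = [14, 16, 13, 15, 17]
female = [18, 17, 19, 16, 20]

f_res = f_oneway(male, female)
t_res = ttest_ind(male, female)  # equal_var=True by default, matching ANOVA
print(f"F = {f_res.statistic:.2f}, t**2 = {t_res.statistic**2:.2f}")
```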

