must be measured more than once. A good example was sent to me by Stacey Freedenthal at
the University of Denver, though the data that I will use are fictitious and should not be taken
to represent her results. Dr. Freedenthal was interested in studying help-seeking behavior in
children. She took a class of 72 children and recorded the incidence of help-seeking before
and after an intervention that was designed to increase students’ help-seeking behavior. She
measured help-seeking in the fall, introduced an intervention around Christmas time, and
then measured help-seeking again, for these same children, in the spring.
Because we are measuring each child twice, we need to make sure that the dependence
between measures does not influence our results. One way to do this is to focus on how
each child changed over the course of the year. To do so it is necessary to identify the
behavior separately for each child so that we know whether each specific child sought help
in the fall and/or in the spring. We can then focus on the change and not on the multiple
measurements per child. To see why independence is important, consider an extreme case.
If exactly the same children who sought help in the fall also sought it in the spring, and
none of the other children did, then the change in the percentage of help-seeking would be
0 and the standard error (over replications of the experiment) would also be 0. But if
whether or not a child sought help in the spring was largely independent of whether he or
she sought help in the fall, the difference in the two percentages might still be close to zero,
but the standard error would be relatively large. In other words, the standard error of change
scores varies as a function of how dependent the scores are.
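To put this argument in symbols (a sketch of my own, not part of the original discussion), let the
fall and spring proportions of children seeking help be the two measures whose difference we study.
Because the same children are measured twice,

    \operatorname{Var}(\hat{p}_{\text{spring}} - \hat{p}_{\text{fall}})
      = \operatorname{Var}(\hat{p}_{\text{spring}}) + \operatorname{Var}(\hat{p}_{\text{fall}})
      - 2\operatorname{Cov}(\hat{p}_{\text{spring}}, \hat{p}_{\text{fall}}).

If exactly the same children seek help on both occasions, the covariance is as large as it can be and
the variance of the difference drops to zero; if the two occasions are essentially independent, the
covariance is near zero and the standard error of the difference is relatively large, just as described above.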
Suppose that we ran this experiment and obtained the not-so-extreme data shown in Table 6.6.
Notice that Table 6.6 looks very much like a contingency table, but with a difference. This
table basically shows how children changed or didn’t change as a result of the intervention.
Notice the two off-diagonal cells, containing 12 and 4 children; these are really the only cells that we
care about. It is not surprising that some children would show a change in their behavior from fall
to spring. And if the intervention had no effect (in other words if the null hypothesis is true),
we would expect about as many to change from “Yes” to “No” as from “No” to “Yes.”
However, if the intervention were effective, we would expect many more children to move from
“No” to “Yes” than to move in the other direction. That is what we will test.
The test that we will use is often called McNemar’s test (McNemar, 1947) and reduces
to a simple one-way goodness-of-fit chi-square where the data are those from the two off-
diagonal cells and the expected frequencies are each half of the number of children chang-
ing. This is shown in Table 6.7.⁶

⁶ This is exactly equivalent to the common z test on the difference in independent proportions, where we are
asking if a significantly greater proportion of people changed in one direction than in the other direction.
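To make that equivalence concrete (my own arithmetic, not the author’s), take the 16 children who
changed, 12 of whom moved from “No” to “Yes.” A z test of that count against the 8 expected under the
null hypothesis gives

    z = \frac{12 - 16(.50)}{\sqrt{16(.50)(.50)}} = \frac{4}{2} = 2.00, \qquad z^{2} = 4.00,

which is the same value as the chi-square computed from the observed and expected frequencies in
Table 6.7: (12 - 8)²/8 + (4 - 8)²/8 = 2.00 + 2.00 = 4.00.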
Table 6.6 Help-seeking behavior in fall and spring

                         Spring
                   Yes     No    Total
    Fall   Yes      38      4      42
           No       12     18      30
    Total           50     22      72

Table 6.7 Results of experiment on help-seeking
behavior in children

                 No → Yes   Yes → No   Total
    Observed        12          4        16
    Expected       8.0        8.0        16
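For readers who want to verify the arithmetic, here is a minimal Python sketch (my own, assuming SciPy is
available; it is not part of the original text) that carries out McNemar’s test as the one-way
goodness-of-fit chi-square described above, using the two off-diagonal counts from Table 6.6.

    from scipy.stats import chi2

    # Off-diagonal (change) cells from Table 6.6
    no_to_yes = 12   # sought help in the spring but not in the fall
    yes_to_no = 4    # sought help in the fall but not in the spring

    changers = no_to_yes + yes_to_no   # 16 children changed
    expected = changers / 2            # 8.0 expected in each direction under the null hypothesis

    # One-way goodness-of-fit chi-square on the two change cells (McNemar's test)
    chi_square = ((no_to_yes - expected) ** 2 / expected
                  + (yes_to_no - expected) ** 2 / expected)
    # Equivalent shortcut form: (no_to_yes - yes_to_no) ** 2 / changers
    p_value = chi2.sf(chi_square, df=1)

    print(f"chi-square = {chi_square:.2f} on 1 df, p = {p_value:.4f}")
    # chi-square = 4.00 on 1 df, p = 0.0455

The shortcut noted in the comment, (b - c)²/(b + c) for the two off-diagonal frequencies b and c, is the
form in which McNemar’s statistic is usually written.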
