Statistical Methods for Psychology

an effect equivalent to a correlation of .32 between presence or absence of psychotherapy
and presence or absence of improvement, by whatever measure. A reviewer subsequently
squared this correlation (.32² = .1024) and deplored the fact that psychotherapy accounted for
only 10% of the variability in outcome. Rosenthal and Rubin were not impressed by the re-
viewer’s perspicacity. They pointed out that if we took 100 people in a control group and
100 people in a treatment group, and dichotomized them as improved or not improved, a
correlation of φ = .32 would correspond to χ² = 20.48. This can be seen by computing

    χ² = Nφ² = 200(.32)² = 200(.1024) = 20.48

since φ = √(χ²/N).
The interesting fact is that such a χ² would result from a contingency table in which 66
of the 100 subjects in the treatment group improved whereas only 34 of the 100 subjects in
the control group improved. (You can easily demonstrate this for yourself by computing
χ² on such a table.) That is a dramatic difference in improvement rates.
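The arithmetic can be checked directly. The following sketch (plain Python, no statistics library assumed) computes the Pearson chi-square for that 66/34 table and recovers φ:

```python
# Rosenthal and Rubin's 2 x 2 improvement table:
# 66 of 100 treated subjects improved vs. 34 of 100 controls.
table = [[66, 34],   # treatment: improved, not improved
         [34, 66]]   # control:   improved, not improved

N = sum(sum(row) for row in table)          # 200 subjects in all
row_totals = [sum(row) for row in table]
col_totals = [sum(col) for col in zip(*table)]

# Pearson chi-square: sum over cells of (observed - expected)^2 / expected,
# where expected = (row total)(column total) / N
chi2 = sum(
    (table[i][j] - row_totals[i] * col_totals[j] / N) ** 2
    / (row_totals[i] * col_totals[j] / N)
    for i in range(2)
    for j in range(2)
)

phi = (chi2 / N) ** 0.5   # phi = sqrt(chi2 / N)

print(round(chi2, 2), round(phi, 2))   # chi-square of 20.48, phi of .32
```

Every expected frequency here is 50, so each of the four cells contributes 16²/50 = 5.12 to the chi-square, giving 20.48, and √(20.48/200) returns the original correlation of .32.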
But I have two more examples. Rosenthal (1990) pointed to a well-known study of
(male) physicians who took a daily dose of either aspirin or a placebo to reduce the inci-
dence of heart attacks. (We considered this study briefly in earlier chapters, but for a differ-
ent purpose.) This study was terminated early because the review panel considered the
results so clearly in favor of the aspirin group that it would have been unethical to continue
to give the control group a placebo. But, said Rosenthal, what was the correlation between
aspirin and heart attacks that was so dramatic as to cut short such a study? Would you believe φ = .034 (φ² = .001)?
I include Rosenthal’s work to make the point that one does not require large values of φ (or φ²) to have an important effect. Small values in certain cases can be quite impressive. For further examples, see Rosenthal (1990).
To return to what appears to be a small effect in Gibson’s sexual abuse data, we will
take an approach adopted in Chapter 6 with odds ratios. In Gibson’s data 50 out of 318
children who received no instruction were subsequently abused, which makes the odds of
abuse for this group to be 50/268 = 0.187. On the other hand, 43 out of 500 children who
received training were subsequently abused, for odds of 43/457 = 0.094. This gives us an
odds ratio (the ratio of the two calculated odds) of 0.187/0.094 = 1.98. A child who does
not receive sexual abuse training in school is nearly twice as likely to be subsequently
abused as one who does. That looks quite a bit different from a squared correlation of only
.012, which illustrates why we must be careful in the statistic we select. (The relative risk
in this case is RR = .157/.086 = 1.83.)
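The odds and risk calculations are simple enough to verify by hand, but a minimal Python sketch using Gibson’s counts makes the two definitions easy to compare:

```python
# Gibson's sexual abuse data: 50 of 318 children with no instruction
# and 43 of 500 children with training were subsequently abused.
abused_untrained, n_untrained = 50, 318
abused_trained, n_trained = 43, 500

# Odds = abused / not abused, within each group
odds_untrained = abused_untrained / (n_untrained - abused_untrained)  # 50/268 ~ 0.187
odds_trained = abused_trained / (n_trained - abused_trained)          # 43/457 ~ 0.094
odds_ratio = odds_untrained / odds_trained                            # ~ 1.98

# Risk = abused / group size; the relative risk is the ratio of risks
risk_untrained = abused_untrained / n_untrained    # ~ .157
risk_trained = abused_trained / n_trained          # = .086
relative_risk = risk_untrained / risk_trained      # ~ 1.83

print(round(odds_ratio, 2), round(relative_risk, 2))
```

Note that the odds ratio (1.98) and the relative risk (1.83) differ because the odds divide by the non-abused count while the risk divides by the whole group; with rarer outcomes the two converge.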
At this point perhaps you are thoroughly confused. I began by showing that you can cal-
culate a correlation between two dichotomous variables. I then showed that this correlation
could either be calculated as a Pearson correlation coefficient, or it could be derived directly
from a chi-square test on the corresponding contingency table, because there is a nice relationship between φ and χ². I argued that r² or φ² can be used to provide an r-family effect size measure (a measure of variation accounted for) of the effectiveness of the independent variable. But then I went a step further and said that when you calculate r² you may be surprised by how small it is. In that context, I pointed to the work of Rosenthal and Rubin, and
to Gibson’s data, showing in two different ways that accounting for only small amounts of
the variation can still be impressive and important. I am mixing different kinds of measures
of “importance” (statistical significance, percentage of accountable variation, effect sizes
[d], and odds ratios), and, while that may be confusing, it is the nature of the problem.

302 Chapter 10 Alternative Correlational Techniques