The two entries on the right for Department B are 353 × 8/585 = 4.827 and 207 × 17/585 = 6.015. The odds for the remaining rows are computed in a similar manner. The overall odds ratio is just the ratio of the sums of those two columns. Thus OR = 141.957/137.108 = 1.03.
The odds ratio tells us that the odds of being admitted if you are a male are 1.03 times the
odds of being admitted if you are a female, which means that the odds are almost identical.
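To make the arithmetic concrete, here is a minimal Python sketch of that calculation (it is not the textbook's own code). It uses only the two departments whose cell counts appear in this excerpt, B and F, so its result will not reproduce the 1.03 obtained from all six departments; the formula is the same, OR = Σ(n11k·n22k/n..k) / Σ(n12k·n21k/n..k).

```python
# Minimal sketch of the pooled (Mantel-Haenszel) odds ratio:
#   OR = sum_k(n11k*n22k/n..k) / sum_k(n12k*n21k/n..k)
# Each table is (n11, n12, n21, n22): rows = Male, Female; columns = Admit, Reject.
# Only Departments B and F appear in this excerpt, so the result below will not
# equal the 1.03 obtained when all six departments are summed.
tables = {
    "B": (353, 207, 17, 8),
    "F": (22, 351, 24, 317),
}

numerator = 0.0    # running sum of n11k*n22k/n..k
denominator = 0.0  # running sum of n12k*n21k/n..k
for dept, (n11, n12, n21, n22) in tables.items():
    n_k = n11 + n12 + n21 + n22
    numerator += n11 * n22 / n_k
    denominator += n12 * n21 / n_k
    print(f"Dept {dept}: {n11 * n22 / n_k:.3f}  {n12 * n21 / n_k:.3f}")

print(f"OR (Departments B and F only) = {numerator / denominator:.3f}")
```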
Underlying the Mantel-Haenszel statistic is the assumption that the odds ratios are comparable across all strata—in this case all departments. But Department A is clearly an outlier. In that department the odds ratio for men to women is 0.35, while all of the other odds ratios are near 1.0, ranging from 0.80 to 1.22. The inclusion of that department would violate one of the assumptions of the test. In this particular case, where we are checking for discrimination against women, it does not distort the final result to leave that department out. Department A actually admitted significantly more women than men. If it had been the other way around I would have serious qualms about looking only at the other five departments.

r-Family: Phi and Cramér’s V


The measures that we have discussed above are sometimes called d-family measures
because they focus on comparing differences between conditions—either by calculating the
difference directly or by using ratios of risks or odds. An older, and more traditional, set of measures, sometimes called “measures of association,” looks at the correlation between two variables. Unfortunately we won’t come to correlation until Chapter 9, but I would expect that you already know enough about correlation coefficients to understand what follows.
There are a great many measures of association, and I have no intention of discussing most of them. One of the nicest discussions of these can be found in Nie, Hull, Jenkins, Steinbrenner, and Bent (1970). (If your instructor is very old—like me—he or she probably remembers it fondly as the old “maroon SPSS manual.” It is such a classic that it is very likely to be available in your university library or through interlibrary loan.)

Phi (φ) and Cramér’s V


In the case of 2 × 2 tables, a correlation coefficient that we will consider in Chapter 10 serves as a good measure of association. This coefficient is called phi (φ), and it represents the correlation between two variables, each of which is a dichotomy. (A dichotomy is a variable that takes on one of two distinct values.) If we coded Aspirin as 1 or 2, for Yes and No, and coded Heart Attack as 1 for Yes and 2 for No, and then correlated the two variables (see Chapters 9 and 10), the result would be phi. (It does not even matter what two numbers we use as values for coding, so long as one condition always gets one value and the other always gets a different [but consistent] value.)
An easier way to calculate φ for these data is by the relation

φ = √(χ²/N)
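As a quick check on that identity, here is a small Python sketch. The 2 × 2 counts in it are purely hypothetical (they are not the aspirin data discussed in the text); the point is only that computing √(χ²/N) and directly correlating the two 0/1-coded dichotomies give the same value, apart from a sign that depends on how the categories are coded.

```python
# Two equivalent routes to phi for a 2 x 2 table (hypothetical counts).
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[30, 70],    # rows = first dichotomy (e.g., Yes / No)
                  [55, 45]])   # columns = second dichotomy (e.g., Yes / No)
N = table.sum()

# Route 1: phi = sqrt(chi-square / N), using the uncorrected chi-square.
chi2, p, dof, expected = chi2_contingency(table, correction=False)
phi_from_chi2 = np.sqrt(chi2 / N)

# Route 2: correlate the two dichotomies directly.
# Expand the table into one 0/1-coded observation per case.
rows = np.repeat([0, 0, 1, 1], table.ravel())
cols = np.repeat([0, 1, 0, 1], table.ravel())
phi_from_r = abs(np.corrcoef(rows, cols)[0, 1])

print(phi_from_chi2, phi_from_r)   # the two values agree (up to sign)
```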

The relevant portion of the departmental computation table (Department F and the column sums) is:

Department    Data           n11k·n22k/n..k    n12k·n21k/n..k
F             22   351            9.768            11.798
              24   317
Sum                              141.957          137.108