Statistical Methods for Psychology

Given $s_{b_1 - b_2} = 0.192$, we can now solve for $t$:

$$t = \frac{b_1 - b_2}{s_{b_1 - b_2}} = \frac{(-0.40) - (-0.20)}{0.192} = -1.04$$

on 198 df. Since $|t_{\text{obt}}| < t_{.025}(198) = 1.97$, we would fail to reject $H_0$ and would therefore con-
clude that we have no reason to doubt that life expectancy decreases as a function of smok-
ing at the same rate for males as for females.
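The arithmetic above is easy to verify; the following sketch simply plugs in the slope estimates, the standard error of their difference, and the critical value of $t$ given in the text:

```python
# Check the t test on the difference between two independent slopes.
# b1, b2, the standard error of the difference, and the two-tailed
# critical value t.025(198) are taken from the example in the text.
b1, b2 = -0.40, -0.20     # slopes for males and females
se_diff = 0.192           # standard error of b1 - b2
t = (b1 - b2) / se_diff
print(round(t, 2))        # -1.04
t_crit = 1.97             # t.025(198)
print(abs(t) < t_crit)    # True: fail to reject H0
```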
It is worth noting that although $H_0: b^* = 0$ is equivalent to $H_0: \rho = 0$, it does not fol-
low that $H_0: b_1^* - b_2^* = 0$ is equivalent to $H_0: \rho_1 - \rho_2 = 0$. If you think about it for a
moment, it should be apparent that two scatter diagrams could have the same regression
line ($b_1^* = b_2^*$) but different degrees of scatter around that line (hence $r_1 \ne r_2$). The re-
verse also holds—two different regression lines could fit their respective sets of data
equally well.

Testing the Difference Between Two Independent rs


When we test the difference between two independent $r$s, a minor difficulty arises. When
$\rho \ne 0$, the sampling distribution of $r$ is not approximately normal (it becomes more and
more skewed as $\rho \rightarrow \pm 1.00$), and its standard error is not easily estimated. The same
holds for the difference $r_1 - r_2$. This raises an obvious problem, because, as you can
imagine, we will need to know the standard error of a difference between correlations if
we are to create a $t$ test on that difference. Fortunately, the solution was provided by
R. A. Fisher.
Fisher (1921) showed that if we transform $r$ to

$$r' = (0.5)\log_e \left| \frac{1 + r}{1 - r} \right|$$

then $r'$ is approximately normally distributed around $\rho'$ (the transformed value of $\rho$) with
standard error

$$s_{r'} = \frac{1}{\sqrt{N - 3}}$$

(Fisher labeled his statistic "z," but "$r'$" is often used to avoid confusion with the standard
normal deviate.) Because we know the standard error, we can now test the null hypothesis
that $\rho_1 - \rho_2 = 0$ by converting each $r$ to $r'$ and solving for

$$z = \frac{r'_1 - r'_2}{\sqrt{\dfrac{1}{N_1 - 3} + \dfrac{1}{N_2 - 3}}}$$

Note that our test statistic is $z$ rather than $t$, since our standard error does not rely on statis-
tics computed from the sample (other than $N$) and is therefore a parameter.
Appendix $r'$ tabulates the values of $r'$ for different values of $r$, which eliminates the
need to solve the equation for $r'$.
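If no table is handy, the transformation and its standard error are one-liners. This sketch (plain Python; the function names are mine) reproduces the tabled value for $r = .50$:

```python
from math import log, sqrt

def r_prime(r):
    """Fisher's transformation of a correlation coefficient r."""
    return 0.5 * log(abs((1 + r) / (1 - r)))

def se_r_prime(n):
    """Standard error of r' for a sample of n pairs."""
    return 1 / sqrt(n - 3)

print(round(r_prime(0.50), 3))   # 0.549
print(round(se_r_prime(53), 3))  # 0.141
```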
To take a simple example, assume that for a sample of 53 males, the correlation be-
tween number of packs of cigarettes smoked per day and life expectancy was .50. For a
sample of 43 females, the correlation was .40. (These are unrealistically high values for r,
but they better illustrate the effects of the transformation.) The question of interest is, Are
these two coefficients significantly different, or are the differences in line with what we
would expect when sampling from the same bivariate population of $X$, $Y$ pairs?
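Carrying the example through in code (the computation below is my own; the inputs are the $r = .50$, $N = 53$ and $r = .40$, $N = 43$ values just given):

```python
from math import log, sqrt

def r_prime(r):
    # Fisher's transformation of r
    return 0.5 * log(abs((1 + r) / (1 - r)))

r1, n1 = 0.50, 53   # males
r2, n2 = 0.40, 43   # females

z = (r_prime(r1) - r_prime(r2)) / sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
print(round(z, 2))    # 0.59
print(abs(z) < 1.96)  # True: no significant difference at alpha = .05
```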


Section 9.11 Hypothesis Testing 275