Statistical Methods for Psychology

estimate of the population variance (σ_e²), regardless of the truth or falsity of H₀, and is actually the average of the variances within each group when the sample sizes are equal:

MS_error = (3.33 + 4.54 + 6.22 + 20.27 + 14.00)/5 = 9.67

MS_treat, however, is not the variance of the treatment means but rather is the variance of those means multiplied by n to produce a second estimate of the population variance (σ_e²).
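The averaging step for MS_error can be verified directly. A minimal sketch, using the five within-group variances quoted above:

```python
# Within-group variances for the five groups (values from the text).
variances = [3.33, 4.54, 6.22, 20.27, 14.00]

# With equal sample sizes, MS_error is the plain average of the group variances.
ms_error = sum(variances) / len(variances)
print(round(ms_error, 2))  # 9.67
```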

The F Statistic


The last column in Table 11.3, labeled F, is the most important one in terms of testing the
null hypothesis. F is obtained by dividing MS_treat by MS_error. There is a precise way and a
sloppy way to explain why this ratio makes sense, and we will start with the latter. As said
earlier, MS_error is an estimate of the population variance (σ_e²). Moreover, MS_treat is an estimate
of the population variance (σ_e²) if H₀ is true, but not if it is false. If H₀ is true, then
MS_treat and MS_error are both estimating the same thing, and as such they should be approximately
equal. If this is the case, the ratio of one to the other will be approximately 1, give
or take a certain amount for sampling error. Thus, all we have to do is to compute the ratio
and determine whether it is close enough to 1 to indicate support for the null hypothesis.
So much for the informal way of looking at F. A more precise approach starts with the
expected mean squares for error and treatments. From earlier in the chapter, we know

E(MS_error) = σ_e²
E(MS_treat) = σ_e² + nθ_τ²

We now form the ratio

E(MS_treat)/E(MS_error) = (σ_e² + nθ_τ²)/σ_e²

The only time this ratio would have an expectation of 1 is when θ_τ² = 0, that is, when
H₀ is true and μ₁ = μ₂ = … = μ₅.⁵ When θ_τ² > 0, the expectation will be greater than 1.
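The behavior of the ratio under the two hypotheses can be illustrated by simulation. A sketch, assuming k = 5 groups of n = 10 drawn from normal populations (these sizes match the example's degrees of freedom; the population means and standard deviation used here are made up for illustration):

```python
import random
random.seed(1)

def f_ratio(means, sigma=3.0, n=10):
    """Draw one sample per group and return MS_treat / MS_error."""
    k = len(means)
    samples = [[random.gauss(m, sigma) for _ in range(n)] for m in means]
    xbars = [sum(s) / n for s in samples]
    grand = sum(xbars) / k
    ms_treat = n * sum((x - grand) ** 2 for x in xbars) / (k - 1)
    ms_error = sum(sum((x - xb) ** 2 for x in s) / (n - 1)
                   for s, xb in zip(samples, xbars)) / k
    return ms_treat / ms_error

# Under H0 (all means equal) the average ratio sits near 1
# (more exactly, near df_error / (df_error - 2) = 45/43, about 1.05).
null_fs = [f_ratio([0, 0, 0, 0, 0]) for _ in range(2000)]
print(sum(null_fs) / len(null_fs))   # close to 1

# When the means differ, the ratio is systematically greater than 1.
alt_fs = [f_ratio([0, 2, 4, 6, 8]) for _ in range(2000)]
print(sum(alt_fs) / len(alt_fs))     # well above 1
```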
The question that remains, however, is: How large a ratio will we accept without rejecting
H₀ when we use not expected values but obtained mean squares, which are computed
from data and are therefore subject to sampling error? The answer to this question lies in
the fact that we can show that the ratio

F = MS_treat/MS_error

is distributed as F on k - 1 and k(n - 1) df. This is the same F distribution discussed earlier
in conjunction with testing the ratio of two variance estimates (which in fact is what
we are doing here). Note that the degrees of freedom represent the df associated with the
numerator and denominator, respectively.
For our example, F = 9.08. We have 4 df for the numerator and 45 df for the denominator,
and can enter the F table (Appendix F) with these values. Appendix F, a portion of
which is shown in Table 11.4, gives the critical values for α = .05 and α = .01. For our
particular case we have 4 and 45 df and, with linear interpolation, F.05(4,45) = 2.58. Because
our obtained F of 9.08 exceeds 2.58, if we have chosen to work at α = .05 we would reject
H₀ and conclude that there are significant differences among the treatment means.
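The tabled critical value can be checked without a statistics library by integrating the F density numerically. A sketch, assuming 4 and 45 df as in the example (the helper names and the use of Simpson's rule are my own choices):

```python
from math import exp, lgamma, log

def f_pdf(x, d1, d2):
    """Density of the F distribution with d1 and d2 df."""
    if x <= 0:
        return 0.0
    log_beta = lgamma(d1 / 2) + lgamma(d2 / 2) - lgamma((d1 + d2) / 2)
    return exp((d1 / 2) * log(d1 / d2) + (d1 / 2 - 1) * log(x)
               - ((d1 + d2) / 2) * log(1 + d1 * x / d2) - log_beta)

def f_cdf(x, d1, d2, steps=2000):
    """P(F <= x) by composite Simpson's rule on [0, x]."""
    h = x / steps
    total = f_pdf(0, d1, d2) + f_pdf(x, d1, d2)
    for i in range(1, steps):
        total += (4 if i % 2 else 2) * f_pdf(i * h, d1, d2)
    return total * h / 3

print(round(f_cdf(2.58, 4, 45), 3))  # about 0.95: 2.58 cuts off the upper 5%
print(1 - f_cdf(9.08, 4, 45) < .05)  # True: the obtained F = 9.08 is significant
```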

328 Chapter 11 Simple Analysis of Variance


⁵ As an aside, note that the expected value of F is not precisely 1 under H₀, although
E(MS_treat)/E(MS_error) = 1 if θ_τ² = 0. To be exact, under H₀,

E(F) = df_error/(df_error - 2)

For all practical purposes, nothing is sacrificed by thinking of F as having an expectation of 1 under H₀ and
greater than 1 under H₁ (the alternative hypothesis).
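The footnote's exact expectation can be checked numerically by integrating x times the F density; a sketch assuming 4 and 45 df as in the example (helper names are my own):

```python
from math import exp, lgamma, log

def f_pdf(x, d1, d2):
    """Density of the F distribution with d1 and d2 df."""
    if x <= 0:
        return 0.0
    log_beta = lgamma(d1 / 2) + lgamma(d2 / 2) - lgamma((d1 + d2) / 2)
    return exp((d1 / 2) * log(d1 / d2) + (d1 / 2 - 1) * log(x)
               - ((d1 + d2) / 2) * log(1 + d1 * x / d2) - log_beta)

# E(F) = integral of x * pdf(x); for these df the tail past 100 is negligible.
d1, d2, steps, upper = 4, 45, 20000, 100.0
h = upper / steps
mean_f = sum((4 if i % 2 else 2) * (i * h) * f_pdf(i * h, d1, d2)
             for i in range(1, steps)) * h / 3

print(round(mean_f, 3))         # about 1.047
print(round(d2 / (d2 - 2), 3))  # 1.047, i.e. df_error / (df_error - 2)
```

Note that the result depends only on the denominator df, matching the footnote's formula.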
