Since clients generally come in whole lots, we will round off to 72. Thus, if the experimenter wants to have an 80% chance of rejecting H₀ when d = 0.33 (i.e., when μ₁ = 105), she will have to use the IQs for 72 randomly selected clients. Although this may be more clients than she can test easily, the only alternative is to settle for a lower level of power.
You might wonder why we selected power = .80; with this degree of power, we still run a 20% chance of making a Type II error. The answer lies in the notion of practicality. Suppose, for example, that we had wanted power = .95. A few simple calculations will show that this would require a sample of n = 119. For power = .99, you would need approximately 162 subjects. These may well be unreasonable sample sizes for this particular experimental situation, or for the resources of the experimenter. Remember that increases in power are generally bought by increases in n and, at high levels of power, the cost can be very high. If you are taking data from data tapes supplied by the Bureau of the Census, that is quite different from studying teenage college graduates. A value of power = .80 makes a Type II error (β = .20) four times as likely as a Type I error (α = .05), which some would take as a reasonable reflection of their relative importance.
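These sample sizes are easy to verify by script. Here is a minimal sketch in Python, assuming the normal approximation δ = d√n and a two-tailed test at α = .05; the function name and the use of scipy are illustrative choices rather than anything from the text, and the results run slightly above the text’s 72, 119, and 162, which were read from a table of rounded δ values.

```python
import math
from scipy.stats import norm

def n_for_power(d, power, alpha=0.05):
    """Approximate n for a one-sample test: the noncentrality
    delta = d * sqrt(n) must reach z(1 - alpha/2) + z(power)."""
    delta_needed = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil((delta_needed / d) ** 2)  # clients come in whole lots

for p in (0.80, 0.95, 0.99):
    print(f"power = {p:.2f}: n = {n_for_power(0.33, p)}")
# power = 0.80: n = 73
# power = 0.95: n = 120
# power = 0.99: n = 169
```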
Noncentrality Parameters
Our statistic δ is what most textbooks refer to as a noncentrality parameter. The concept is relatively simple, and well worth considering. First, we know that

$$t = \frac{\bar{X} - \mu}{s/\sqrt{n}}$$
is distributed around zero regardless of the truth or falsity of any null hypothesis, as long as μ is the true mean of the distribution from which the Xs were sampled. If H₀ states that μ = μ₀ (some specific value of μ) and if H₀ is true, then

$$t = \frac{\bar{X} - \mu_0}{s/\sqrt{n}}$$
will also be distributed around zero. If H₀ is false and μ ≠ μ₀, however, then

$$t = \frac{\bar{X} - \mu_0}{s/\sqrt{n}}$$

will not be distributed around zero because in subtracting μ₀, we have been subtracting the wrong population mean. In fact, the distribution will be centered at the point

$$\delta = \frac{\mu_1 - \mu_0}{\sigma/\sqrt{n}}$$
This shift in the mean of the distribution from zero to δ is referred to as the degree of noncentrality, and δ is the noncentrality parameter. (What is δ when μ₁ = μ₀?) The noncentrality parameter is just one way of expressing how wrong the null hypothesis is.
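A short simulation makes the shift concrete. The sketch below is illustrative; it assumes numpy and uses μ₀ = 100, μ₁ = 105, and σ = 15, the values consistent with d = 0.33 and the chapter’s IQ example:

```python
import numpy as np

rng = np.random.default_rng(0)
mu0, mu1, sigma, n = 100, 105, 15, 72   # hypothesized mean, true mean, sd, sample size

# Sample repeatedly from the TRUE population (mean mu1), but compute t
# against the hypothesized mean mu0, as we would when H0 is false.
samples = rng.normal(mu1, sigma, size=(10_000, n))
t = (samples.mean(axis=1) - mu0) / (samples.std(axis=1, ddof=1) / np.sqrt(n))

delta = (mu1 - mu0) / (sigma / np.sqrt(n))          # the noncentrality parameter
print(f"simulated mean of t: {t.mean():.2f}; delta: {delta:.2f}")  # both near 2.8
```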
The question of power becomes the question of how likely we are to find a value of the noncentral (shifted) distribution that is greater than the critical value that t would have under H₀. In other words, even though larger-than-normal values of t are to be expected because H₀ is false, we will occasionally obtain small values by chance. The percentage of these values that happen to lie between ±t.025 is β, the probability of a Type II error. As we know, we can convert from β to power: power = 1 − β.
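Rather than working from tables, β can be computed directly from the noncentral t distribution, whose noncentrality argument is exactly δ. A sketch using scipy’s nct with the clinic example’s numbers:

```python
import numpy as np
from scipy.stats import t as t_dist, nct

n, d = 72, 0.33
df = n - 1
delta = d * np.sqrt(n)                     # noncentrality parameter, about 2.80
t_crit = t_dist.ppf(0.975, df)             # two-tailed critical value at alpha = .05

# beta is the probability that the noncentral t lands inside (-t_crit, +t_crit)
beta = nct.cdf(t_crit, df, delta) - nct.cdf(-t_crit, df, delta)
print(f"beta = {beta:.3f}, power = {1 - beta:.3f}")   # power close to .80
```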
Cohen’s contribution can be seen as splitting the noncentrality parameter (δ) into two parts—sample size and effect size. One part (d) depends solely on parameters of the populations, whereas the other depends on sample size. Thus, Cohen has separated parametric considerations (μ₀, μ₁, and σ), about which we can usually do little, from sample size (n), over which we have more control.
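Written out, the split is simple algebra on the formula for δ given above:

$$\delta = \frac{\mu_1 - \mu_0}{\sigma}\,\sqrt{n} = d\sqrt{n}$$

For the clinic example, δ = 0.33√72 ≈ 2.80, which is just the δ required for power = .80 at α = .05.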