tested using individual t tests. If you take the square roots of the Fs for these two contrasts,
they will equal 6.72 and 2.47, which are precisely the values we obtained for t earlier. This
simply illustrates the fact that t tests are a special case of linear contrasts.
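This equivalence is easy to verify numerically. Below is a minimal Python sketch with hypothetical data (the group means, group size, and coefficients are invented for illustration, not the values from the example above): it computes a single contrast's t and F from the same $MS_{error}$ and confirms that $t^2 = F$.

import numpy as np

# Hypothetical data: 5 groups of n = 8 (not the text's example)
rng = np.random.default_rng(1)
n = 8
groups = [rng.normal(m, 3, n) for m in (4, 10, 11, 24, 29)]
means = np.array([g.mean() for g in groups])

# MS_error: average within-group variance (equal n)
ms_error = np.mean([g.var(ddof=1) for g in groups])

a = np.array([1, -0.5, -0.5, 0, 0])              # coefficients; sum to 0
psi = a @ means                                  # psi-hat = sum of a_j * mean_j

t = psi / np.sqrt(ms_error * np.sum(a**2) / n)   # t for the contrast
F = (n * psi**2 / np.sum(a**2)) / ms_error       # F = SS_contrast / MS_error

print(np.isclose(t**2, F))                       # True: sqrt(F) = |t|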
With four contrasts, we have an FW approaching .20 if all null hypotheses are true,
which seems highly unlikely. This error rate is uncomfortably high, although some
experimenters would accept it, especially for a priori contrasts. One way of reducing the
error rate would be to run each comparison at a more stringent level of $\alpha$; for
example, $\alpha = .01$. Another alternative would be to use a different a priori procedure,
the Bonferroni procedure, which amounts to almost the same thing as the first alternative
but is conducted in a more precise manner. We will consider this procedure after we briefly
discuss a special type of linear contrast, called orthogonal contrasts. Yet a third way to
control FW is to run fewer contrasts. For example, the comparison of M-M with S-S is
probably not very important. Whether complete tolerance develops on the fourth trial or
on the sixth or seventh trial is of no great theoretical interest. By eliminating that contrast,
we could reduce the maximum FW to .15. You should never choose to run contrasts the
way you eat peanuts or climb mountains—just because they are there. In general, if a
contrast is not important, do not run it.
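To see where these figures come from: with $c$ independent contrasts each tested at $\alpha = .05$, the familywise error rate is $1 - (1 - \alpha)^c$, bounded above by $c\alpha$. Checking the .20 and .15 figures:

$$1 - (.95)^4 = .185 \le 4(.05) = .20, \qquad 1 - (.95)^3 = .143 \le 3(.05) = .15$$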
Orthogonal Contrasts
Linear contrasts as they have been defined allow us to test a series of hypotheses about
treatment differences. Sometimes contrasts are independent of one another, and sometimes
they are not. For example, knowing that $\bar{X}_1$ is greater than the average of $\bar{X}_2$
and $\bar{X}_3$ tells you nothing about whether $\bar{X}_3$ is greater than $\bar{X}_2$, nor
whether $\bar{X}_4$ is likely to be greater than $\bar{X}_5$. These contrasts are independent.
However, knowing that $\bar{X}_1$ is greater than the average of $\bar{X}_2$ and $\bar{X}_3$
suggests that there is a better than 50:50 chance that $\bar{X}_1$ is greater than $\bar{X}_2$.
These two contrasts are not independent. When members of a set of contrasts are independent
of one another, they are called orthogonal contrasts, and the sums of squares of a complete
set of orthogonal contrasts sum to $SS_{treat}$. (If the contrasts are not orthogonal, they
contain overlapping amounts of information and do not have this additivity property.) From a
calculational point of view, what sets orthogonal contrasts apart from other types of
contrasts we might choose is the relationship between the coefficients for one contrast and
the coefficients for other contrasts in the set. Other than that, the computations are
exactly the same.
Orthogonal Coefficients
Given that sample sizes are equal, for contrasts to be orthogonal the coefficients must meet
the following criteria:

1. $\sum a_j = 0$

2. $\sum a_j b_j = 0$

where $a_j$ and $b_j$ are the sets of coefficients for different contrasts. Furthermore, for
the $SS_{contrast}$ to sum to $SS_{treat}$, we need to add a third criterion:

3. Number of comparisons = number of $df$ for treatments
The first restriction has been discussed already; it results in the contrast's being a sum
of squares. The second restriction ensures that the contrasts are independent of (or
orthogonal to) one another, and thus that we are summing nonoverlapping components. The
third restriction says nothing more than that if you want the parts to sum to the whole,
you need to have all the parts.
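As a quick numerical check (a sketch with hypothetical data, assuming four groups of equal size; the means and coefficients are invented), the Python snippet below builds a complete set of $k - 1 = 3$ orthogonal contrasts, verifies all three criteria, and confirms that the contrast sums of squares add up to $SS_{treat}$.

import numpy as np

# Hypothetical data: 4 groups of n = 10 (equal sample sizes)
rng = np.random.default_rng(2)
n = 10
groups = [rng.normal(m, 2, n) for m in (5, 8, 9, 14)]
means = np.array([g.mean() for g in groups])
grand = means.mean()                           # grand mean (equal n)

# A complete orthogonal set: k - 1 = 3 contrasts for k = 4 groups
C = np.array([[1.0,  1, -1, -1],               # groups (1,2) vs (3,4)
              [1.0, -1,  0,  0],               # group 1 vs 2
              [0.0,  0,  1, -1]])              # group 3 vs 4

assert np.allclose(C.sum(axis=1), 0)           # criterion 1: sum a_j = 0
for i in range(3):                             # criterion 2: sum a_j b_j = 0
    for j in range(i + 1, 3):
        assert np.isclose(C[i] @ C[j], 0)

ss_treat = n * np.sum((means - grand) ** 2)
ss_contrasts = [n * (a @ means) ** 2 / np.sum(a ** 2) for a in C]
print(np.isclose(sum(ss_contrasts), ss_treat)) # True: additivity holds

Dropping any one contrast from the set (violating criterion 3) would leave the remaining sums of squares short of $SS_{treat}$.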