the design of the experiment, which deliberately placed nearly equal numbers of observations in the two levels of Fault.
The interaction itself we have already tested at the beginning of the chapter. There we found that χ² = 37.3503, and we concluded that Fault and Verdict were not independent.
We have run each of these tests separately. Now let’s see how we can derive them from
the log-linear models that we have already created. (In higher-order designs we can still
test the effect of single variables [what the analysis of variance labels as main effects], but
not interaction, in the way we just did. However, the model-comparison approach to be
adopted generalizes to interaction effects as well.)
We have found that the simplest model [ln(F_ij) = λ] produces a χ² = 109.5889. (In SAS the likelihood ratio χ² is denoted as "deviance.") When we added λ_i^V to this model, χ² dropped to 37.3960, reflecting the variation in cell frequencies attributable to Verdict. This drop (109.5889 − 37.3960 = 72.1929) is the χ² for Verdict, and its degrees of freedom equal the difference between the degrees of freedom in the two models (3 − 2 = 1). This is exactly the same value we obtained in Table 17.5 when we compared the marginal frequencies. In other words, adjusting for λ_i^V yields the same result as basing our results on the marginals.
By a similar line of reasoning, we can note that taking Fault into account and going from ln(F_ij) = λ to ln(F_ij) = λ + λ_j^F reduces χ² from 109.5889 to 109.5442, for a decrease of 0.0447. This is the same as the marginal χ² on Fault that we obtained in Table 17.6.
Finally, we should note that when we go from a model of ln(F_ij) = λ + λ_i^V + λ_j^F to ln(F_ij) = λ + λ_i^V + λ_j^F + λ_ij^VF, χ² drops from 37.3503 to 0.00. This drop (37.3503) is the same as the χ² for the interaction based on marginal frequencies. This equality will not generally hold for more complex designs unless we are looking at the highest-order interaction.
One other feature of log-linear models should be mentioned. The minimal model [ln(F_ij) = λ] produced χ² = 109.5889. The individual components of the saturated model had χ² values of 72.1929, 0.0447, and 37.3503. These sum to 109.5889 (to within rounding of the individual components). In other words, these likelihood ratio values are additive. This would not have been the case had we computed the Pearson chi-square statistic instead, which is one very good reason to concentrate on likelihood ratio χ².
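The additivity is easy to verify numerically. Note that the components as reported to four decimals sum to 109.5879, which agrees with the minimal-model value of 109.5889 only to within rounding:

```python
# Component likelihood-ratio chi-squares: Verdict, Fault, Verdict x Fault.
components = [72.1929, 0.0447, 37.3503]

total = sum(components)
print(round(total, 4))  # 109.5879, within rounding of the minimal model's 109.5889
```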
At this point you should have an overview of parameter estimates. It would be smart to
go back to the beginning of Section 17.2 and reread that section. (I would if I were you,
and I wrote the chapter.)
640 Chapter 17 Log-Linear Analysis
Table 17.6  Test on differences due to Fault

                 Fault
              Low     High
    f_ij      177      181
    F_ij      179      179

    χ² = 2 Σ f_ij ln(f_ij / F_ij)
       = 2[177 ln(177/179) + 181 ln(181/179)]
       = 0.0447
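The statistic in Table 17.6 can be computed directly from the formula above. A minimal sketch (the function name is mine, not from any statistics package):

```python
import math

def likelihood_ratio_chisq(observed, expected):
    """Likelihood-ratio chi-square: 2 * sum of f_ij * ln(f_ij / F_ij)."""
    return 2 * sum(f * math.log(f / F) for f, F in zip(observed, expected))

# Marginal test on Fault from Table 17.6:
# observed frequencies 177 and 181 against expected frequencies of 179 each.
g2 = likelihood_ratio_chisq([177, 181], [179, 179])
print(round(g2, 4))  # 0.0447
```

Because the two observed marginal frequencies sit so close to the expected 179, the resulting χ² of 0.0447 is far from significant, consistent with the deliberately balanced design noted above.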