true, and we still want to know which model is better. One way of doing that is to compare
the sections labeled “Information Criteria” for each analysis. These are reproduced below
for the two models.
Section 14.12 Mixed Models for Repeated-Measures Designs 505
Information Criteria(a)

                                         Compound Symmetry    Autoregressive (1)
-2 Restricted Log Likelihood                   905.398              895.066
Akaike's Information Criterion (AIC)           909.398              899.066
Hurvich and Tsai's Criterion (AICC)            909.555              899.224
Bozdogan's Criterion (CAIC)                    916.136              905.805
Schwarz's Bayesian Criterion (BIC)             914.136              903.805

The information criteria are displayed in smaller-is-better form.
a. Dependent Variable: dv
A good way to compare models is to compare either Akaike’s Information Criterion (AIC) or Schwarz’s Bayesian Information Criterion (BIC). In general, the model with the smaller value is preferred. For our examples the two AIC values are 909.398 and 899.066. It would appear that the Autoregressive (1) model is to be preferred, which is in line with what our eyes told us about the covariance structures. (If we had rerun the analysis using an unstructured covariance matrix (COVTYPE(UN)), AIC would be 903.691 and BIC would be 927.385, so we would still choose the autoregressive model.)
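The relationship between the reported values can be checked by hand: SPSS prints -2 times the restricted log likelihood, and AIC adds a penalty of 2 for each estimated covariance parameter. A minimal sketch in Python (my own illustration, not part of the SPSS output; the value k = 2 is inferred from the tables, since AIC exceeds -2LL by exactly 4 in both columns):

```python
# AIC = -2(restricted log likelihood) + 2k,
# where k is the number of estimated covariance parameters.
def aic(neg2_ll, k):
    return neg2_ll + 2 * k

# Values taken from the SPSS output above; both structures
# estimate k = 2 covariance parameters (inferred from the output).
aic_cs = aic(905.398, k=2)   # Compound Symmetry
aic_ar1 = aic(895.066, k=2)  # Autoregressive (1)

print(aic_cs, aic_ar1)       # 909.398 and 899.066, matching SPSS
print(aic_ar1 < aic_cs)      # smaller is better, so AR(1) is preferred
```

The same logic extends to BIC and CAIC, which replace the penalty 2k with penalties that also grow with the sample size.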
Mixed models have a great deal to offer: they let us fit and compare underlying models so as to best interpret our data, and they can be very valuable in the presence of missing data. However, they are more difficult to work with, and the software, while certainly improving, is far from intuitive in some cases. Nevertheless, I think that this is the direction that more and more analyses will take over the next decade, and it is important to understand them.
Papers by Overall, Tonidandel, and others illustrate the problems with mixed models. The major problem is that it is very difficult to know how to specify your model correctly, and different specifications can lead to different results and sometimes rather low power. Excellent papers in this regard are Overall and Shivakumar (1997) and Overall and Tonidandel (2007). I recommend that you look at those papers when considering the use of mixed models, although those authors used SAS Proc Mixed for their analyses, and it is not entirely clear how those models relate to the models you would fit using SPSS. What seems to be critically important is the case in which missing data depend on the participant’s initial response at baseline, together with attempts to use that baseline measure as a covariate.
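To make that last point concrete, here is a hypothetical simulation (my own illustration, not taken from those papers) in which the probability of dropping out before follow-up depends on a participant's baseline score, which is precisely the kind of missingness mechanism at issue:

```python
import random

random.seed(42)

# Hypothetical illustration: participants with lower baseline
# scores are more likely to drop out before follow-up.
n = 10_000
baselines = [random.gauss(50, 10) for _ in range(n)]

def dropped_out(baseline):
    # Dropout probability depends on the baseline score:
    # 60% for low scorers, 10% for everyone else (made-up values).
    p = 0.6 if baseline < 45 else 0.1
    return random.random() < p

drop = [dropped_out(b) for b in baselines]

low = [d for b, d in zip(baselines, drop) if b < 45]
high = [d for b, d in zip(baselines, drop) if b >= 45]

rate_low = sum(low) / len(low)
rate_high = sum(high) / len(high)
print(f"dropout rate, low baseline:  {rate_low:.2f}")
print(f"dropout rate, high baseline: {rate_high:.2f}")
# The follow-up sample over-represents high-baseline participants,
# so an analysis that ignores this mechanism can be badly misled.
```

Because the completers are a biased subset of the original sample, how the model handles the baseline measure (as an outcome, as a covariate, or both) can change the conclusions, which is the specification problem those authors document.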