Statistical Methods for Psychology

15.12 The “Importance” of Individual Variables


When an investigator derives a regression equation to predict some criterion on the basis of several variables, it is logical for her to want to know which of the variables is most important in predicting Y. Unfortunately, that question has no simple answer, except in the unusual case in which the predictors are mutually independent. As we have seen, b_j (or b_j²) is sometimes taken as a measure of importance. This is done on the grounds that b_j² can be interpreted as the unique contribution of each variable to the prediction of Y. Thus, X_1 has some variance in common with Y that is not shared by any of the other variables, and this variance is represented by b_1². The difficulty with this measure is that it has nothing to say about the portion of the variance of Y that X_1 does share with the other variables but that is in some sense part of the contribution of X_1 to the prediction of Y. Moreover, what does it mean to speak of the independent contribution of variables that are not independent?
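To make the idea of a "unique contribution" concrete, the following sketch (not part of the text) regresses a criterion on two essentially uncorrelated, standardized predictors and checks that b_1² very nearly matches the drop in R² when X_1 is left out of the model. It assumes Python with NumPy; the variable names and population coefficients are arbitrary.

import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Two essentially uncorrelated predictors and a criterion built from both.
x1 = rng.standard_normal(n)
x2 = rng.standard_normal(n)
y = 0.6 * x1 + 0.3 * x2 + rng.standard_normal(n)

def standardize(v):
    return (v - v.mean()) / v.std()

def r_squared(criterion, predictors):
    # Ordinary least-squares R^2, intercept included.
    X = np.column_stack([np.ones(len(criterion))] + predictors)
    coefs, *_ = np.linalg.lstsq(X, criterion, rcond=None)
    resid = criterion - X @ coefs
    return 1 - resid.var() / criterion.var()

zy, z1, z2 = standardize(y), standardize(x1), standardize(x2)

# Standardized regression weights from the full model.
b, *_ = np.linalg.lstsq(np.column_stack([z1, z2]), zy, rcond=None)

print("b_1 squared:                 ", round(b[0] ** 2, 3))
print("R^2(full) - R^2(without X_1):", round(r_squared(zy, [z1, z2]) - r_squared(zy, [z2]), 3))

The agreement holds only because the predictors are (nearly) independent; with correlated predictors the two quantities diverge, which is the point made above.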
Darlington (1990) has argued against using b_j as a measure of importance. b_j does represent the difference, in standard deviation units, between two cases that are equal on all other predictor variables but differ by one unit on X_j. However, this does not take into account the fact that when variables are highly correlated, such cases will rarely, if ever, exist.
Basing a measure of importance on the b weights has the further serious drawback that when variables are highly correlated (a condition known as multicollinearity), the values of b are very unstable from sample to sample, although R² may change very little. Given two sets of data, it would not be particularly unusual to find

Ŷ = 0.50Z_1 + 0.25Z_2

in one case and

Ŷ = 0.25Z_1 + 0.50Z_2

in the other, with nearly equal values of R² associated with the two equations. If we now seek a measure of the contribution of each of the predictors in accounting for Y (as opposed to using regression to simply predict Y for a given set of data), we could come to quite different conclusions for the two data sets. Darlington (1968) presents an interesting discussion of this issue and concludes that b_i has only limited utility as a measure of "importance." An even stronger stand is taken by Cooley and Lohnes (1971), who point out that our estimate of b ultimately relies on our estimates of the elements of the intercorrelation matrix. Because this matrix contains p + p(p − 1)/2 intercorrelations that are all subject to sampling error, Cooley and Lohnes suggested that we must be exceedingly careful about attaching practical significance to the regression coefficients.
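This instability is easy to demonstrate by simulation; a minimal sketch follows (not part of the text). It assumes Python with NumPy, picks an arbitrary correlation of .95 between two predictors, and draws 1,000 samples of 50 cases from the same population. The standardized weights swing widely from sample to sample while R² varies far less, which is the pattern described above.

import numpy as np

rng = np.random.default_rng(42)
n, n_samples, rho = 50, 1000, 0.95          # cases per sample, replications, predictor correlation
cov = np.array([[1.0, rho], [rho, 1.0]])

betas, r2s = [], []
for _ in range(n_samples):
    # Highly correlated predictors and a criterion that depends on both.
    X = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    y = 0.4 * X[:, 0] + 0.4 * X[:, 1] + rng.standard_normal(n)

    # Standardize so the fitted weights are standardized (beta) weights.
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)
    yz = (y - y.mean()) / y.std()

    b, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
    r2s.append(1 - ((yz - Xz @ b) ** 2).sum() / (yz ** 2).sum())
    betas.append(b)

betas = np.array(betas)
print("SD of b_1 over samples:", round(betas[:, 0].std(), 3))   # weights are unstable
print("SD of R^2 over samples:", round(np.std(r2s), 3))         # fit is comparatively stable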
It is easy to illustrate the problem we have here. In earlier editions of this book I used an example in which 50 university courses were rated on several variables and a regression equation was computed to predict the overall rating from the ratings of other variables, such as how good a teacher the instructor was, how fair the exams were, and so on. The regression equation that was derived from that data set, using standardized regression coefficients, was

Ẑ_Y = 0.662 Teach + 0.106 Exam + 0.325 Knowledge − 0.105 Grade + 0.124 Enroll
The multiple R² was .755. I then took a second set of 50 courses sampled from the same source as the original data in Table 15.1. In this case, R² was more or less the same as it had been for the first example (R² = .710), but the regression equation looked quite different. In terms of standardized variables the equation was

Ẑ_Y = 0.371 Teach + 0.113 Exam + 0.567 Knowledge − 0.27 Grade + 0.184 Enroll

If you compare these two equations, it is clear that there are substantial differences in some of the values of b_i.