Statistical Methods for Psychology

difference between those states on any other variable. If we now solved for our regression
coefficients using the standardized variables, we would obtain

Ŷ_z = 0.203 Z_Expend − 1.040 Z_LogPctSAT

where Z is used to denote standardized variables. In this case, the regression coefficients are
called standardized regression coefficients, labeled "Beta" by SPSS and denoted β_j. Thus

β_1 = 0.203
β_2 = −1.040

When variables have been standardized, the intercept (b_0) is equal to 0 and is not shown.
From the preceding values of β_j we can conclude that a one-unit difference (i.e., a difference
of one standard deviation) between states in Z_1 (the standardized Expend variable),
with LogPctSAT held constant, will be associated with a difference in Ŷ_z of 0.203 units and
therefore a difference in Ŷ of 0.203 standard deviations. A one-unit difference in Z_2 will be
associated with a difference in Ŷ_z of −1.040. It begins to look as if LogPctSAT may be a more
important predictor than Expend. Although the relative magnitudes of the β_j are not necessarily
the best indicators of "importance," they have a simple interpretation, are printed by
most regression computer programs, and generally give at least a rough estimate of the relative
contributions of the variables in the equation. Standardized regression coefficients can
be obtained from nearly all statistical software that will run a regression analysis.
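A minimal sketch of the two equivalent routes to standardized coefficients — scaling the raw-score slopes by β_j = b_j(s_xj / s_y), or z-scoring every variable and refitting. The data below are invented for illustration; only the variable names (Expend, LogPctSAT, SAT) follow the chapter's example.

```python
import numpy as np

# Fabricated data loosely echoing the chapter's example.
rng = np.random.default_rng(0)
n = 50
expend = rng.normal(6.0, 1.5, n)          # per-pupil expenditure
log_pct_sat = rng.normal(3.0, 0.8, n)     # log of percent taking the SAT
sat = 1000 + 12 * expend - 80 * log_pct_sat + rng.normal(0, 30, n)

# Raw-score regression: SAT ~ Expend + LogPctSAT (intercept column first).
X = np.column_stack([np.ones(n), expend, log_pct_sat])
b = np.linalg.lstsq(X, sat, rcond=None)[0]            # b0, b1, b2

# Route 1: rescale each raw slope by (s_x / s_y).
s_y = sat.std(ddof=1)
beta1 = b[1] * expend.std(ddof=1) / s_y
beta2 = b[2] * log_pct_sat.std(ddof=1) / s_y

# Route 2: z-score everything and refit; the intercept is then exactly 0,
# so no intercept column is needed.
def z(v):
    return (v - v.mean()) / v.std(ddof=1)

Xz = np.column_stack([z(expend), z(log_pct_sat)])
betas = np.linalg.lstsq(Xz, z(sat), rcond=None)[0]

assert np.allclose([beta1, beta2], betas)   # the two routes agree
```

The assertion at the end checks the identity the text relies on: standardized coefficients are just a linear rescaling of the raw-score ones.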

15.3 Standard Errors and Tests of Regression Coefficients


Once we have a regression coefficient, standardized or not, we normally test it for statisti-
cal significance. If the coefficient relating Expend to SAT is not statistically significantly
different from 0, then Expend will serve no useful purpose in the prediction of SAT. As you
might suspect, it doesn't matter whether we test the raw-score regression coefficients (b_j)
or the standardized coefficients (β_j). They are simply linear transformations of one another,
and we would obtain the same test statistic in either case.
To test a regression coefficient (or most other statistics for that matter), we need to
know the standard error of that statistic. The standard errors for the b_j are given in Exhibit
15.1 and labeled "Std. Error." For example, the standard error of b_0, the intercept, is
16.700, and the standard error for b_1 is 3.264. As with other standard errors, the standard
error of the regression coefficient refers to the variability of the statistic over repeated sam-
pling. Suppose we repeated the study many times on different independent samples of stu-
dents. (I know that we can’t do that, but we can at least pretend that we can.) Each
replication would be expected to give us a slightly different value of b_1, although each of
these would be an unbiased estimate of the true coefficient in the population, which we will
denote as b*_1. The many values of b_1 would be normally distributed about b*_1 with a standard
deviation estimated to be 3.264, the standard error of b_1.
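The repeated-sampling idea behind a standard error can be made concrete with a small simulation: draw many independent samples from a known population, refit the regression each time, and watch b_1 scatter around its true value. All numbers here are invented; they are not the chapter's data.

```python
import numpy as np

rng = np.random.default_rng(1)
true_b = np.array([250.0, 12.0, -80.0])   # intercept, b1*, b2* (made up)

def one_sample(n=50):
    """Draw one sample from the population and return the fitted b's."""
    x1 = rng.normal(6.0, 1.5, n)
    x2 = rng.normal(3.0, 0.8, n)
    y = true_b[0] + true_b[1] * x1 + true_b[2] * x2 + rng.normal(0, 30, n)
    X = np.column_stack([np.ones(n), x1, x2])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# 2000 replications of the "study," each giving its own b1.
b1_estimates = np.array([one_sample()[1] for _ in range(2000)])

# The mean of the b1's sits near the population value (unbiasedness), and
# their standard deviation is exactly what a single sample's "Std. Error"
# column is trying to estimate.
mean_b1 = b1_estimates.mean()
sd_b1 = b1_estimates.std(ddof=1)
```

With these (arbitrary) population values, the distribution of b_1 over replications centers on 12, its true value, illustrating the unbiasedness the text describes.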
We can use these standard errors to form a t test on the regression coefficients. Specifically,

t = (b_j − b*_j) / s_b_j

on N − p − 1 degrees of freedom.^3 Under the usual null hypothesis that b*_j = 0, this
reduces to t = b_j / s_b_j.
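A sketch of that test computed by hand: fit the regression, estimate the residual variance on N − p − 1 df, obtain the standard errors from the diagonal of MSE·(X′X)⁻¹, and form t = b_j / s_b_j. The data are fabricated; only the procedure matters.

```python
import numpy as np
from scipy import stats

# Invented sample: N = 50 cases, p = 2 predictors.
rng = np.random.default_rng(2)
n, p = 50, 2
x1 = rng.normal(6.0, 1.5, n)
x2 = rng.normal(3.0, 0.8, n)
y = 250 + 12 * x1 - 80 * x2 + rng.normal(0, 30, n)

X = np.column_stack([np.ones(n), x1, x2])
b, *_ = np.linalg.lstsq(X, y, rcond=None)

resid = y - X @ b
df = n - p - 1                      # N - p - 1 degrees of freedom
mse = resid @ resid / df            # residual variance estimate
se = np.sqrt(mse * np.diag(np.linalg.inv(X.T @ X)))   # std. errors of the b's

t = b / se                          # tests H0: b*_j = 0
p_values = 2 * stats.t.sf(np.abs(t), df)   # two-tailed p for each coefficient
```

These t and p values match what the "t" and "Sig." columns of a package's coefficient table report, up to the caveat in footnote 3.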



^3 A number of authors (e.g., Draper & Smith, 1981; Huberty, 1989) have pointed out that in general this is not
exactly distributed as Student's t. It is generally treated as if it were, but one should not take the associated
probability too literally.
