Building and Testing a Multiple Linear Regression Model 97
correlation using the residual pairs $e_t$ and $e_{t-1}$ as the observations. The
formula is
$$
\rho_{\text{auto}} = \frac{\sum_{t=2}^{n} e_t\, e_{t-1}}{\sum_{t=1}^{n} e_t^{2}} \qquad (4.24)
$$
where $\rho_{\text{auto}}$ denotes the estimated autocorrelation and $e_t$ is the computed
residual, or error term, for the $t$th observation.
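As an illustration, formula (4.24) can be sketched in a few lines of Python; the residual values below are hypothetical, chosen only to show a persistent series:

```python
def estimated_autocorrelation(e):
    """Estimate rho_auto per Eq. (4.24): the sum of successive residual
    products e_t * e_{t-1}, divided by the sum of squared residuals."""
    numerator = sum(e[t] * e[t - 1] for t in range(1, len(e)))
    denominator = sum(x * x for x in e)
    return numerator / denominator

# Hypothetical residual series with visible persistence: positive values
# tend to follow positives, negatives follow negatives.
residuals = [1.0, 0.8, 0.9, -0.2, -0.5, -0.4, 0.3, 0.6]
rho_auto = estimated_autocorrelation(residuals)  # positive, about 0.51
```

Note that the numerator runs over $t = 2, \ldots, n$ (it needs the lagged residual), while the denominator runs over all $n$ residuals.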
A positive autocorrelation means that if residual $e_t$ is positive (nega-
tive), then the residual that follows, $e_{t+1}$, tends to be positive (negative).
Positive autocorrelation is said to exhibit persistence. A negative autocor-
relation means that a positive (negative) residual $e_t$ tends to be followed by a
negative (positive) residual $e_{t+1}$.
The presence of significant autocorrelation in a time series means that, in a
probabilistic sense, the series is predictable because future values are correlated
with current and past values. From an estimation perspective, the existence of
autocorrelation complicates hypothesis testing of the regression coefficients.
This is because although the regression coefficient estimates are unbiased, they
are not best linear unbiased estimates. Hence, the variances may be signifi-
cantly underestimated and the resulting hypothesis tests rendered questionable.
Detecting Autocorrelation How do we detect the autocorrelation of residu-
als? Suppose that we believe that there is a reasonable linear relationship
between two variables, for instance stock returns and some fundamental
variable. We then perform a linear regression between the two variables and
estimate regression parameters using the OLS method. After estimating the
regression parameters, we can compute the sequence of residuals. At this
point, we can apply statistical tests. Several tests for autocorrelation of
residuals are available; two such tests are the Durbin-Watson test and the
Dickey-Fuller test. We discuss only the first below.
The most popular test is the Durbin-Watson test, or more specifically,
the Durbin-Watson d-statistic, computed as
$$
d = \frac{\sum_{t=2}^{n} (e_t - e_{t-1})^{2}}{\sum_{t=1}^{n} e_t^{2}}
$$
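A minimal Python sketch of the d-statistic, using a hypothetical residual series. A value of $d$ near 2 indicates little first-order autocorrelation, while $d$ well below 2 points to positive autocorrelation:

```python
def durbin_watson(e):
    """Durbin-Watson d-statistic: the sum of squared successive residual
    differences, divided by the sum of squared residuals."""
    numerator = sum((e[t] - e[t - 1]) ** 2 for t in range(1, len(e)))
    denominator = sum(x * x for x in e)
    return numerator / denominator

# Hypothetical, persistent residual series: d comes out well below 2,
# consistent with positive autocorrelation (d is roughly 2(1 - rho_auto)).
residuals = [1.0, 0.8, 0.9, -0.2, -0.5, -0.4, 0.3, 0.6]
d = durbin_watson(residuals)  # about 0.58
```

The statistic ranges from 0 to 4; the rough relation $d \approx 2(1 - \rho_{\text{auto}})$ explains why $d = 2$ corresponds to no estimated first-order autocorrelation.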