model is the expression shown in equation (3.1) in Chapter 3, reproduced here as equation (13.4):

$$
y = \beta_0 + \beta_1 x_1 + \cdots + \beta_k x_k + \varepsilon \qquad (13.4)
$$

Then the difference $b - \beta$, where $\beta = (\beta_0, \beta_1, \ldots, \beta_k)'$ is the vector of true parameters, is a normally distributed random variable.
It can be demonstrated that $E(b - \beta) = 0$; that is, the expectation of the estimated regression coefficients is equal to the true parameters, and the variance-covariance matrix of the difference $b - \beta$ is

$$
\operatorname{cov}(b - \beta) = (X'X)^{-1}\sigma^2
$$

where $\sigma^2$ is the variance of the error term. If the regression variables are normally distributed, the above can be stated concisely by saying that the estimator $b$ is distributed as a multivariate normal variable with mean $\beta$ and covariance matrix $(X'X)^{-1}\sigma^2$.
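These sampling properties can be checked numerically. The following is a minimal sketch (assuming NumPy; the design matrix, parameter values, and number of replications are hypothetical choices, not taken from the text) that holds X fixed, redraws the errors many times, and compares the empirical mean and covariance of b with $\beta$ and $(X'X)^{-1}\sigma^2$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fixed design: an intercept column plus two regressors.
n, k = 200, 2
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])
beta = np.array([1.0, 0.5, -2.0])   # true parameters
sigma = 0.8                          # standard deviation of the error term

# Theoretical covariance of b: sigma^2 (X'X)^{-1}
cov_theory = sigma**2 * np.linalg.inv(X.T @ X)

# Monte Carlo: keep X fixed, redraw the errors, re-estimate b each time.
draws = []
for _ in range(5000):
    y = X @ beta + sigma * rng.normal(size=n)
    b = np.linalg.solve(X.T @ X, X.T @ y)   # OLS estimate of the coefficients
    draws.append(b)
draws = np.asarray(draws)

print("mean of b:", draws.mean(axis=0))          # close to beta (unbiasedness)
print("empirical cov:\n", np.cov(draws.T).round(4))
print("sigma^2 (X'X)^-1:\n", cov_theory.round(4))
```

In this sketch the empirical mean of the estimates is close to the true parameter vector and the empirical covariance is close to $(X'X)^{-1}\sigma^2$, in line with the statements above.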
The variance of an estimator is a measure of its accuracy. The Gauss-
Markov theorem states that the OLS regression estimator is the best linear
unbiased estimator (generally referred to by the acronym BLUE) in the
sense that it has the lowest possible variance among all linear unbiased
estimators.
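The BLUE property can be illustrated with a small Monte Carlo comparison. The sketch below (again assuming NumPy; the simple-regression setup and the "endpoint" estimator are illustrative choices, not from the text) contrasts the OLS slope with another linear unbiased estimator of the slope, namely the line through the first and last observations: both are centered on the true value, but the OLS estimator has the smaller variance.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simple regression y = b0 + b1*x + e with fixed regressor values.
x = np.linspace(0.0, 10.0, 50)
b0, b1, sigma = 2.0, 1.5, 1.0

ols_slopes, endpoint_slopes = [], []
for _ in range(10000):
    y = b0 + b1 * x + sigma * rng.normal(size=x.size)
    # OLS slope: cov(x, y) / var(x)
    ols_slopes.append(np.cov(x, y, bias=True)[0, 1] / np.var(x))
    # Another linear unbiased estimator: the slope of the line
    # through the first and last observations.
    endpoint_slopes.append((y[-1] - y[0]) / (x[-1] - x[0]))

print("OLS slope:      mean %.3f  variance %.5f"
      % (np.mean(ols_slopes), np.var(ols_slopes)))
print("endpoint slope: mean %.3f  variance %.5f"
      % (np.mean(endpoint_slopes), np.var(endpoint_slopes)))
# Both means are close to the true slope 1.5, but the OLS variance is
# much smaller, as the Gauss-Markov theorem predicts.
```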
Weighted Least Squares Method
The OLS method can be generalized in different ways. The first important
generalization is when all variables, X and y, are assumed to be random
variables. In this case, the computation of the regression coefficients from
the observed data remains unchanged. In fact, with OLS we compute the
coefficients b given the data. Hence, we estimate b with the same formula used in the ordinary least squares case, given here as equation (13.5):

$$
b = (X'X)^{-1}X'y \qquad (13.5)
$$

This formula is numerically identical to the OLS estimator, but the data X are now a realization of the random variables X.
Assuming that residuals and regressors are independent, the OLS estimator given by equation (13.5) is still an unbiased estimator even when the regressors are stochastic. However, the covariance of the estimator is no longer $(X'X)^{-1}\sigma^2$. That expression is the estimator's covariance matrix under the assumption that the regressors X are fixed; if the regressors are stochastic, $(X'X)^{-1}\sigma^2$ is only the conditional covariance given X. To obtain the unconditional covariance we have to average $(X'X)^{-1}\sigma^2$ over the distribution of the regressors. In general, even if the data are normally distributed, the resulting distribution of the estimator is not a normal distribution.
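To make the averaging over the regressors' distribution concrete, here is a minimal sketch (assuming NumPy; the sample size and parameter values are illustrative, not from the text) in which X is redrawn in every replication. The empirical covariance of b across replications is close to the average of the conditional covariances $(X'X)^{-1}\sigma^2$, which is exactly the averaging described above.

```python
import numpy as np

rng = np.random.default_rng(2)
n, sigma = 30, 1.0
beta = np.array([1.0, 0.5])

estimates, cond_covs = [], []
for _ in range(20000):
    # Stochastic regressors: X is redrawn in every replication.
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    y = X @ beta + sigma * rng.normal(size=n)
    XtX_inv = np.linalg.inv(X.T @ X)
    estimates.append(XtX_inv @ X.T @ y)      # b = (X'X)^{-1} X'y, equation (13.5)
    cond_covs.append(sigma**2 * XtX_inv)     # conditional covariance given this X

estimates = np.asarray(estimates)
print("unconditional covariance of b:\n", np.cov(estimates.T).round(5))
print("average of sigma^2 (X'X)^{-1} over the draws of X:\n",
      np.mean(cond_covs, axis=0).round(5))
```

The same simulation also illustrates unbiasedness with stochastic regressors: as long as the errors have zero mean given X, the law of iterated expectations gives $E[b] = E[E[b \mid X]] = \beta$.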