Model Estimation 275


We have also seen that the estimated regression parameters can be repre-
sented in terms of the sample data by expression (3.10) in Chapter 3, which
is reproduced here as equation (13.3):


\[
\mathbf{b} = (\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'\mathbf{y} \tag{13.3}
\]

where

\[
\mathbf{y} =
\begin{bmatrix}
y_1 \\ \vdots \\ y_n
\end{bmatrix},
\qquad
\mathbf{X} =
\begin{bmatrix}
1 & x_{11} & \cdots & x_{1k} \\
\vdots & \vdots & & \vdots \\
1 & x_{n1} & \cdots & x_{nk}
\end{bmatrix}
\]
n = the number of samples
k = the number of regressors

The column of 1s in matrix X corresponds to the constant intercept term.
To illustrate, let's use the data in Table 13.1. In this case, the matrix
X has two columns: a column of 1s and the data in Table 13.1. The esti-
mated regression coefficients can be computed from equation (13.3). If we
evaluate this expression (or if we perform the regression estimation with
commercial software, for example using the MATLAB regress function), we
obtain the following estimate for the regression coefficients:


\[
\mathbf{b} =
\begin{bmatrix}
0.8065 \\ 0.1317
\end{bmatrix}
\]

which are the same coefficients we obtained above as the best fitting
straight line.
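The computation in equation (13.3) can be sketched in a few lines of NumPy. The data below are hypothetical stand-ins (Table 13.1 is not reproduced here), so the coefficients will differ from the 0.8065 and 0.1317 above; the point is the mechanics of the normal-equations formula:

```python
import numpy as np

# Hypothetical sample data standing in for Table 13.1:
# y is the dependent variable, x the single regressor.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 1.9, 3.2, 3.8, 5.1])

# Design matrix X: a column of 1s (the intercept term) plus the regressor.
X = np.column_stack([np.ones_like(x), x])

# Equation (13.3): b = (X'X)^{-1} X'y
b = np.linalg.inv(X.T @ X) @ X.T @ y

# The same estimate via a numerically safer least-squares solver.
b_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)

print(b)                          # [intercept, slope]
print(np.allclose(b, b_lstsq))   # True
```

In practice one would call a least-squares solver such as `np.linalg.lstsq` rather than invert X'X explicitly, since the explicit inverse is less stable when the regressors are nearly collinear; the two agree here, as the `allclose` check confirms.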
Recall from Chapter 3 that the regressors X are assumed to be deter-
ministic while the dependent variable y is a random variable. This means
that in different samples, only the y change while the X remain fixed. Note
that, if there are k regressors plus a constant term, then \((\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'\)
is a (k + 1) × n matrix and y is an n × 1 vector in equation (13.3). Hence the
vector b is the product of a (k + 1) × n matrix and an n × 1 vector. Each
component of b is therefore a linear combination of the sample data y, given
that the X are fixed. Hence equation (13.3) shows that the OLS estimator b is
a linear estimator.
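The linearity claim can be checked directly: with X fixed, the matrix A = (X'X)⁻¹X' that maps y to b does not depend on y, so A applied to a linear combination of sample vectors equals the same linear combination of the separate estimates. A minimal sketch with arbitrary synthetic data (the dimensions n and k are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

n, k = 8, 2  # hypothetical sample size and number of regressors
# Fixed design matrix: intercept column plus k regressors.
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])

# The fixed (k+1) x n matrix mapping sample data y to the estimate b.
A = np.linalg.inv(X.T @ X) @ X.T

# Two different realizations of the (random) dependent variable.
y1 = rng.normal(size=n)
y2 = rng.normal(size=n)

# Linearity: A(a*y1 + c*y2) = a*(A y1) + c*(A y2) for any scalars a, c.
a, c = 2.0, -0.5
lhs = A @ (a * y1 + c * y2)
rhs = a * (A @ y1) + c * (A @ y2)
print(np.allclose(lhs, rhs))  # True
```

This is exactly why normality of y carries over to b: a fixed linear map applied to normally distributed data yields a normally distributed estimator.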
The estimator b is a function of sample data y given that the X remain
fixed. The sample data y are random data and therefore b is a random vari-
able. Recall from Chapter 3 that we assume that the regression variables are
normally distributed and therefore b is a linear combination of normal vari-
ables and is therefore a normal variable. Suppose that the true regression