272 The Basics of Financial Econometrics


instead of y = a + bx. We therefore compute the sum S as follows:


S = (0.7 − a − b × 0.8 − c × 0.8^2)^2 + (1.3 − a − b × 1.8 − c × 1.8^2)^2
  + (1.1 − a − b × 2.9 − c × 2.9^2)^2 + (1.7 − a − b × 4.2 − c × 4.2^2)^2
  + (1.6 − a − b × 5.5 − c × 5.5^2)^2 + (1.4 − a − b × 6.7 − c × 6.7^2)^2
  + (1.6 − a − b × 7.4 − c × 7.4^2)^2 + (1.7 − a − b × 8.1 − c × 8.1^2)^2
  + (2.1 − a − b × 9.2 − c × 9.2^2)^2 + (2.4 − a − b × 10.6 − c × 10.6^2)^2
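The sum S can be checked numerically. A minimal Python sketch (an illustration, not the book's code; the (x, y) pairs are transcribed from the example) evaluates S for any candidate parameters a, b, and c:

```python
# Sample (x, y) data pairs transcribed from the example
x = [0.8, 1.8, 2.9, 4.2, 5.5, 6.7, 7.4, 8.1, 9.2, 10.6]
y = [0.7, 1.3, 1.1, 1.7, 1.6, 1.4, 1.6, 1.7, 2.1, 2.4]

def sum_squared_residuals(a, b, c):
    """Evaluate S = sum of (y_i - a - b*x_i - c*x_i^2)^2."""
    return sum((yi - a - b * xi - c * xi ** 2) ** 2 for xi, yi in zip(x, y))

# At the second-degree fit reported in the text, the square root of S
# comes out close to the value 0.6483 quoted below.
S = sum_squared_residuals(0.8893, 0.0889, 0.0038)
print(round(S ** 0.5, 4))
```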

As in the previous case, we can find the minimum of the sum S by equating to zero the partial derivatives of the above equation. However, commercial software generally uses iterative optimizers. For example, if we again use the function polyfit of MATLAB, specifying a polynomial of second degree, we obtain the polynomial


y = 0.8893 + 0.0889x + 0.0038x^2

Figure 13.4 represents the scatterplot of the sample data and the optimal polynomial.
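MATLAB's polyfit is not the only way to obtain these coefficients: setting the three partial derivatives of S to zero yields a 3 × 3 linear system (the normal equations) that can be solved directly. A pure-Python sketch of this route (illustrative code, not the book's):

```python
# Fit y = a + b*x + c*x^2 by least squares via the normal equations.
x = [0.8, 1.8, 2.9, 4.2, 5.5, 6.7, 7.4, 8.1, 9.2, 10.6]
y = [0.7, 1.3, 1.1, 1.7, 1.6, 1.4, 1.6, 1.7, 2.1, 2.4]

# Power sums appearing in the normal equations
n = len(x)
Sx = sum(x); Sx2 = sum(t ** 2 for t in x)
Sx3 = sum(t ** 3 for t in x); Sx4 = sum(t ** 4 for t in x)
Sy = sum(y)
Sxy = sum(t * u for t, u in zip(x, y))
Sx2y = sum(t ** 2 * u for t, u in zip(x, y))

# Normal equations: A [a, b, c]' = rhs
A = [[n,   Sx,  Sx2],
     [Sx,  Sx2, Sx3],
     [Sx2, Sx3, Sx4]]
rhs = [Sy, Sxy, Sx2y]

def solve3(M, v):
    """Solve a 3x3 system by Gaussian elimination with partial pivoting."""
    M = [row[:] for row in M]; v = v[:]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]; v[i], v[p] = v[p], v[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            for k in range(i, 3):
                M[r][k] -= f * M[i][k]
            v[r] -= f * v[i]
    out = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        out[i] = (v[i] - sum(M[i][j] * out[j] for j in range(i + 1, 3))) / M[i][i]
    return out

a, b, c = solve3(A, rhs)
print(round(a, 4), round(b, 4), round(c, 4))  # coefficients of 1, x, x^2
```

Note that MATLAB's polyfit returns its coefficient vector in descending powers of x, so its output must be read in the opposite order from the polynomial as written above.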
As in the previous case, we can evaluate the square root of the sum of squared residuals. We obtain S = 0.6483. A second-degree polynomial therefore offers a slightly better approximation to the sample data than a straight line. However, we are now estimating three parameters, a, b, and c, instead of two as in the case of the straight line. Obtaining better precision on sample data by increasing the order of the best-fitting polynomial is not necessarily advantageous. This is because a model with many parameters might fit unpredictable fluctuations of the data and have poor forecasting performance.
In general, a polynomial can approximate any set of data with arbitrary precision provided that we choose a sufficiently high degree for the approximating polynomial. For example, Figure 13.5 illustrates the approximation to our sample data obtained with a polynomial of degree 10. As the number of parameters of the polynomial is at least equal to the number of data points in our sample, the fit is perfect and the residuals are zero.
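This perfect-fit behavior is easy to reproduce. The sketch below (illustrative Python, not the book's MATLAB code) builds the interpolating polynomial through all ten points in Lagrange form; degree 9 is the lowest degree that passes exactly through ten points, whereas Figure 13.5 uses degree 10. The interpolant matches every sample point exactly, but evaluating it just beyond the rightmost observation shows how quickly it leaves the level of the data:

```python
# Data from the example (10 points): an interpolating polynomial with as
# many coefficients as data points fits them exactly.
x = [0.8, 1.8, 2.9, 4.2, 5.5, 6.7, 7.4, 8.1, 9.2, 10.6]
y = [0.7, 1.3, 1.1, 1.7, 1.6, 1.4, 1.6, 1.7, 2.1, 2.4]

def interpolate(t):
    """Evaluate the unique degree-9 interpolating polynomial (Lagrange form)."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(x, y)):
        basis = 1.0
        for j, xj in enumerate(x):
            if j != i:
                basis *= (t - xj) / (xi - xj)
        total += yi * basis
    return total

# In-sample: the residual is (numerically) zero at every data point
print(max(abs(interpolate(xi) - yi) for xi, yi in zip(x, y)))

# Out-of-sample: beyond the rightmost point the polynomial departs
# rapidly from the level of the observed data
print(interpolate(12.0))
```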
However, Figure 13.5 also illustrates that the best-fitting polynomial of degree 10 will not be good at approximating new data: it moves far from the data immediately after the rightmost point in the sample. This is a fundamental aspect of estimation methods. Using models with many parameters, for instance approximating with polynomials of high degree, we can fit sample data very well. However, performance in representing or forecasting out-of-sample data will be poor. Model estimation is always a compromise between in-sample accuracy of estimates and model parsimony.
