The Basics of Financial Econometrics


based on nothing but luck; in the best case, a guess might be justified by
some experience. However, this approach is hardly analytical. Instead, we
should use the information obtained from the sample, or better, the statistic.
When we are interested in the estimation of a particular parameter θ,
we typically do not refer to the estimation function as a statistic but rather
as an estimator and denote it by θ̂ : X → Θ. This means that the estimator
is a function from the sample space X mapping into the parameter space
Θ. The estimator can be understood as some instruction of how to process
the sample to obtain a valid representative of the parameter θ. The exact
structure of the estimator is determined before the sample is realized. After
the estimator has been defined, we simply need to enter the sample values
accordingly.
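As a toy illustration of this idea (the sample-mean rule and the population parameters below are our own assumptions, not from the text), an estimator can be coded as an ordinary function that is fixed before any data are seen and is then applied to whatever sample happens to be realized:

```python
import random

# An estimator is a fixed rule mapping a sample (a point in the sample
# space X) to a value in the parameter space Theta.  Here the rule is
# the sample mean; it is defined before any data are drawn.
def estimator(sample):
    return sum(sample) / len(sample)

# Only now is a sample realized; entering its values yields an estimate.
random.seed(0)
sample = [random.gauss(5.0, 2.0) for _ in range(100)]
estimate = estimator(sample)
```

The same function applied to a different realized sample would return a different estimate, while the estimator itself, the rule, never changes.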
Due to the estimator’s dependence on the random sample, the estimator
is itself random. A particular value of the estimator based on the realization
of some sample is called an estimate. For example, if we realize 1,000 samples
of given length n, we obtain 1,000 individual estimates θˆi, i = 1, 2,... , 1,000.
Sorting them by value, we can compute the distribution function of these
realizations, which is similar to the empirical cumulative distribution func-
tion explained in Appendix A. Technically, this distribution function is not
the same as the theoretical sampling distribution for this estimator for given
sample length n introduced earlier. For increasing n, however, the distribu-
tion of the realized estimates will gradually become more and more similar in
appearance to the sampling distribution.


Linear estimators We turn to a special type of estimator, the linear estima-
tor. Suppose we have a sample of size n such that X = (X 1 , X 2 ,... , Xn). The
linear estimator then has the following form:


θ̂ = ∑_{i=1}^{n} a_i X_i
where each draw X_i is weighted by some real number a_i, for i = 1, 2, ..., n.
The usual constraint on the a_i is that they sum to 1, that is,


∑_{i=1}^{n} a_i = 1

A particular version of the linear estimator is the sample mean where all ai = 1/n.
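A minimal sketch of the linear estimator (the sample values and the particular weight vectors are hypothetical choices for illustration):

```python
# A linear estimator weights each draw X_i by a real a_i, with the
# weights constrained to sum to 1.
def linear_estimator(sample, weights):
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(a * x for a, x in zip(weights, sample))

sample = [2.0, 4.0, 6.0, 8.0]
n = len(sample)

# Equal weights a_i = 1/n recover the sample mean.
equal_weights = [1.0 / n] * n
mean_estimate = linear_estimator(sample, equal_weights)    # 5.0

# Any other weights summing to 1 also define a valid linear estimator.
skewed_weights = [0.4, 0.3, 0.2, 0.1]
skewed_estimate = linear_estimator(sample, skewed_weights)  # 4.0
```

With equal weights the estimator is exactly the sample mean; unequal weights shift the estimate toward the more heavily weighted draws.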
Let’s look at a particular distribution of the Xi, the normal distribution.
As we know from Appendix B, this distribution can be expressed in closed
form under linear affine transformation by Properties 1 and 2. That is, by
adding several Xi and multiplying the resulting sum by some constant, we
