Inferential Statistics 363
once again obtain a normal random variable. Thus, any linear estimator of
normally distributed observations will itself be normal. This is an extremely
attractive feature of the linear estimator.
Even if the underlying distribution is not normal, the central limit theorem,
as explained in Appendix B, tells us that the sample mean (i.e., the linear
estimator with a_i = 1/n) will be approximately normally distributed as the
sample size increases. This result facilitates parameter estimation for most
distributions.
What sample size n is sufficient? If the population distribution is sym-
metric, approximate normality does not require a large sample; often fewer
than 10 observations suffice. If, however, the population distribution is not
symmetric, we will need larger samples.
In general, an n between 25 and 30 suffices. One is on the safe side when n
exceeds 30.
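The convergence described above can be sketched with a small simulation. The code below is illustrative only (the exponential population, sample size, and helper function are our own choices, not from the text): it draws many samples of size n = 30 from a strongly right-skewed population and shows that the distribution of the sample means is far less skewed, i.e., much closer to normal.

```python
import numpy as np

# Illustrative sketch: sampling distribution of the mean of a skewed
# (exponential) population for n = 30.
rng = np.random.default_rng(0)

def skewness(x):
    """Standardized third central moment (a simple skewness measure)."""
    x = np.asarray(x)
    return np.mean((x - x.mean()) ** 3) / x.std() ** 3

n, trials = 30, 20_000
# Each row is one sample of size n from an exponential population,
# whose theoretical skewness is 2.
samples = rng.exponential(scale=1.0, size=(trials, n))
sample_means = samples.mean(axis=1)

print(skewness(samples.ravel()))   # near 2: the population is skewed
print(skewness(sample_means))      # much closer to 0 (theory: 2/sqrt(30))
```

Repeating the experiment with larger n drives the skewness of the sample means still closer to zero, in line with the rule of thumb that n above 30 is on the safe side.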
The central limit theorem requires that certain conditions on the popu-
lation distribution are met, such as finiteness of the variance. If the variance
or even the mean does not exist, another theorem, the so-called generalized
central limit theorem, can be applied under certain conditions. These condi-
tions are beyond the scope of this book, but we will give one example.
The class of α-stable distributions provides such a limiting distribution, that
is, one that certain estimators of the form
\[
\sum_{i=1}^{n} a_i X_i
\]
will approximately follow as n increases. We note this distribution because
it is one that has been suggested by financial economists as a more general
alternative to the Gaussian distribution to describe returns on financial assets.
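A concrete member of the α-stable family is the Cauchy distribution (the case α = 1), which has neither a finite mean nor a finite variance. The sketch below (our own illustration, not from the text) shows what the failure of the ordinary central limit theorem looks like: the sample mean of standard Cauchy draws is itself standard Cauchy for every n, so its heavy tails do not thin out as the sample grows.

```python
import numpy as np

# Illustrative sketch: for a standard Cauchy population, the sample mean
# is again standard Cauchy, so increasing n does not tame the tails.
rng = np.random.default_rng(0)

tail_fracs = {}
for n in (10, 100, 1_000):
    means = rng.standard_cauchy(size=(5_000, n)).mean(axis=1)
    # For a standard Cauchy variable, P(|X| > 3) is about 0.205,
    # and that probability is the same for the sample mean at every n.
    tail_fracs[n] = np.mean(np.abs(means) > 3)
    print(n, tail_fracs[n])
```

Contrast this with the exponential or normal case, where the same tail fraction for the sample mean would shrink rapidly toward zero as n grows.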
Quality Criteria of Estimators
The question underlying each estimation problem is which estimator is
best suited to the problem at hand. Estimators of the very same parameter
can vary quite remarkably in the quality of their estimation. Here we will
explain some of the most commonly employed quality criteria.
Bias An important consideration in the selection of an estimator is the
average behavior of that estimator over all possible scenarios. Depending
on the sample outcome, the estimator may not equal the parameter value
and, instead, be quite remote from it. This is a natural consequence of the
variability of the underlying sample. However, the average value of the esti-
mator is something we can control.
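The average behavior of an estimator can be checked by simulation. As an illustration (the standard normal population and sample size are our own choices, not from the text), the code below compares two common estimators of the population variance: dividing the sum of squared deviations by n, versus dividing by n - 1. Averaged over many samples, the first systematically falls short of the true value while the second does not.

```python
import numpy as np

# Illustrative sketch: average behavior of two variance estimators for a
# standard normal population (true variance 1), samples of size 5.
rng = np.random.default_rng(0)

n, trials = 5, 50_000
samples = rng.standard_normal(size=(trials, n))

biased = samples.var(axis=1, ddof=0).mean()    # divides by n
unbiased = samples.var(axis=1, ddof=1).mean()  # divides by n - 1

print(biased)    # close to (n - 1)/n = 0.8: on average too small
print(unbiased)  # close to the true variance, 1
```

Any single sample may still produce an estimate far from the true variance; what the division by n - 1 controls is precisely the average value of the estimator over all possible samples.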
Let us begin by considering the sampling error, that is, the difference
between the estimate and the population parameter. This distance is random