The Basics of Financial Econometrics
If we are concerned about some estimator's properties, we may reasonably
have to remain undecided about the selection of the most suitable estimator
for the estimation problem we are facing. In fortunate cases, the uncertainty
regarding an estimator's quality may vanish as n goes to infinity, so that we
can base conclusions concerning its applicability for certain estimation tasks
on its large-sample properties.
The central limit theorem plays a crucial role in assessing the properties
of estimators. This is because normalized sums turn into standard normal
random variables, which provide us with tractable quantities. The asymp-
totic properties of normalized sums may facilitate deriving the large-sample
behavior of more complicated estimators.
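The tractability of normalized sums can be illustrated by simulation. The following sketch (not from the text; the Uniform(0, 1) choice and all parameter values are illustrative assumptions) standardizes the sum of n i.i.d. uniform draws and checks that the result behaves like a standard normal random variable:

```python
import random
import statistics

def normalized_sum(n, rng):
    # X_i ~ Uniform(0, 1), so mu = 1/2 and sigma = sqrt(1/12).
    mu, sigma = 0.5, (1 / 12) ** 0.5
    s = sum(rng.random() for _ in range(n))
    # Standardize the sum: (S_n - n*mu) / (sigma * sqrt(n)).
    return (s - n * mu) / (sigma * n ** 0.5)

rng = random.Random(42)
draws = [normalized_sum(1000, rng) for _ in range(5000)]
# By the central limit theorem the draws should have mean
# close to 0 and standard deviation close to 1.
print(statistics.mean(draws), statistics.stdev(draws))
```

The same standardization works for any i.i.d. sequence with finite variance; only mu and sigma change.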
At this point, we need to think about a rather technical concept that
involves controlling the behavior of estimators in the limit. Here we will
analyze an estimator’s convergence characteristics. That means we consider
whether the distribution of an estimator approaches some particular prob-
ability distribution as the sample sizes increase. To do so, we state the fol-
lowing definition:
Convergence in probability. We say that a random variable such as an
estimator built on a sample of size n, θ̂_n, converges in probability
to some constant c if

    lim_{n→∞} P(|θ̂_n − c| > ε) = 0    (C.2)

holds for any ε > 0.
Equation (C.2) states that as the sample size becomes arbitrarily large,
the probability that our estimator assumes a value more than ε away from c
becomes negligible, no matter how small we choose ε.
Instead of the rather lengthy form of equation (C.2), we usually state more
briefly that θ̂_n converges in probability to c as

    plim θ̂_n = c    (C.3)
Here, we introduce the index n to the estimator θ̂_n to indicate that it depends
on the sample size n. Convergence in probability does not mean that the esti-
mator will eventually be equal to c, and hence constant itself, but rather that
deviations from c become increasingly unlikely.
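A small simulation makes definition (C.2) concrete. In this sketch (not from the text; the sample mean of Uniform(0, 1) draws with c = 1/2, and all parameter values, are illustrative assumptions), we estimate P(|θ̂_n − c| > ε) by Monte Carlo for increasing n and watch it shrink toward zero:

```python
import random

def exceed_prob(n, eps, trials, rng):
    # Estimate P(|theta_hat_n - c| > eps), where theta_hat_n is the
    # sample mean of n Uniform(0, 1) draws and c = 1/2.
    count = 0
    for _ in range(trials):
        mean = sum(rng.random() for _ in range(n)) / n
        if abs(mean - 0.5) > eps:
            count += 1
    return count / trials

rng = random.Random(0)
# The estimated exceedance probability falls as n grows,
# consistent with plim theta_hat_n = 1/2.
for n in (10, 100, 1000):
    print(n, exceed_prob(n, 0.05, 2000, rng))
```

Running the loop for larger ε makes the probabilities vanish even faster, while a smaller ε merely requires a larger n before they become negligible.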
Suppose now that we draw several samples of size n. Let the num-
ber of these different samples be N. Consequently, we obtain N estimates
θ̂_n^(1), θ̂_n^(2), ..., θ̂_n^(N), where θ̂_n^(1) is estimated on the first sample, θ̂_n^(2) on the sec-
ond, and so on. Utilizing the prior definition, we formulate the following law.