Robert V. Hogg, Joseph W. McKean, Allen T. Craig

322 Consistency and Limiting Distributions

Definition 5.1.1. Let $\{X_n\}$ be a sequence of random variables and let $X$ be a random variable defined on a sample space. We say that $X_n$ converges in probability to $X$ if, for all $\epsilon > 0$,
$$\lim_{n\to\infty} P[|X_n - X| \ge \epsilon] = 0,$$
or equivalently,
$$\lim_{n\to\infty} P[|X_n - X| < \epsilon] = 1.$$
If so, we write
$$X_n \xrightarrow{P} X.$$

If $X_n \xrightarrow{P} X$, we often say that the mass of the difference $X_n - X$ is converging to 0. In statistics, the limiting random variable $X$ is often a constant; i.e., $X$ is a degenerate random variable with all its mass at some constant $a$. In this case, we write $X_n \xrightarrow{P} a$. Also, as Exercise 5.1.1 shows, for a sequence of real numbers $\{a_n\}$, $a_n \to a$ is equivalent to $a_n \xrightarrow{P} a$.

One way of showing convergence in probability is to use Chebyshev's Theorem (1.10.3). An illustration of this is given in the following proof. To emphasize the fact that we are working with sequences of random variables, we may place a subscript $n$ on the appropriate random variables; for example, write $X$ as $X_n$.
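The definition can be made concrete by simulation. The sketch below is an illustration, not part of the text's development: it takes $X_n = \max(U_1,\dots,U_n)$ for iid Uniform$(0,1)$ draws, for which $X_n \xrightarrow{P} 1$ since $P[|X_n - 1| \ge \epsilon] = (1-\epsilon)^n \to 0$, and estimates this probability by Monte Carlo (NumPy assumed; the sample sizes and $\epsilon$ are arbitrary choices).

```python
import numpy as np

# X_n = max of n iid Uniform(0,1) draws; X_n ->P 1, since
# P[|X_n - 1| >= eps] = (1 - eps)^n -> 0 as n grows.
rng = np.random.default_rng(0)
eps, reps = 0.05, 10_000
p_hats = []
for n in [10, 100, 1000]:
    xn = rng.uniform(size=(reps, n)).max(axis=1)
    p_hats.append(float(np.mean(np.abs(xn - 1) >= eps)))
    print(f"n={n:5d}  estimated P[|X_n - 1| >= {eps}] = {p_hats[-1]:.4f}")
```

The estimated probabilities shrink toward 0 as $n$ increases, matching the exact values $(0.95)^n$.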


Theorem 5.1.1 (Weak Law of Large Numbers). Let $\{X_n\}$ be a sequence of iid random variables having common mean $\mu$ and variance $\sigma^2 < \infty$. Let $\overline{X}_n = n^{-1}\sum_{i=1}^n X_i$. Then
$$\overline{X}_n \xrightarrow{P} \mu.$$

Proof: From expression (2.8.6) of Example 2.8.1, the mean and variance of $\overline{X}_n$ are $\mu$ and $\sigma^2/n$, respectively. Hence, by Chebyshev's Theorem, we have for every $\epsilon > 0$,
$$P[|\overline{X}_n - \mu| \ge \epsilon] = P\!\left[|\overline{X}_n - \mu| \ge \left(\sqrt{n}\,\epsilon/\sigma\right)\!\left(\sigma/\sqrt{n}\right)\right] \le \frac{\sigma^2}{n\epsilon^2} \to 0.
$$

This theorem says that all the mass of the distribution of $\overline{X}_n$ is converging to $\mu$, as $n \to \infty$. In a sense, for $n$ large, $\overline{X}_n$ is close to $\mu$. But how close? For instance, if we were to estimate $\mu$ by $\overline{X}_n$, what can we say about the error of estimation? We answer this in Section 5.3.
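The Chebyshev bound $\sigma^2/(n\epsilon^2)$ from the proof can be checked empirically. The following is a minimal sketch (NumPy assumed), using Uniform$(0,1)$ samples, for which $\mu = 1/2$ and $\sigma^2 = 1/12$; these distributional choices are for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma2 = 0.5, 1 / 12          # mean and variance of Uniform(0,1)
eps, reps = 0.05, 10_000
results = []
for n in [10, 100, 1000]:
    # Sample mean of n iid Uniform(0,1) draws, replicated reps times.
    xbar = rng.uniform(size=(reps, n)).mean(axis=1)
    p_hat = float(np.mean(np.abs(xbar - mu) >= eps))
    # Chebyshev bound sigma^2/(n * eps^2), capped at 1.
    bound = min(sigma2 / (n * eps**2), 1.0)
    results.append((n, p_hat, bound))
    print(f"n={n:5d}  P-hat={p_hat:.4f}  Chebyshev bound={bound:.4f}")
```

In each row the estimated probability stays below the Chebyshev bound and both tend to 0, as the proof requires; the bound is typically quite loose, which is expected since Chebyshev's inequality holds for every distribution with finite variance.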
Actually, in a more advanced course, a Strong Law of Large Numbers is proved; see page 124 of Chung (1974). One result of this theorem is that we can weaken the hypothesis of Theorem 5.1.1 to the assumption that the random variables $X_i$ are independent and each has finite mean $\mu$. Thus the Strong Law of Large Numbers is a first moment theorem, while the Weak Law requires the existence of the second moment.
There are several theorems concerning convergence in probability which will
be useful in the sequel. Together the next two theorems say that convergence in
probability is closed under linearity.


Theorem 5.1.2. Suppose $X_n \xrightarrow{P} X$ and $Y_n \xrightarrow{P} Y$. Then $X_n + Y_n \xrightarrow{P} X + Y$.
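As a quick empirical check of this closure property, the sketch below (NumPy assumed; the specific distributions are arbitrary choices for illustration) takes $X_n$ and $Y_n$ to be sample means of Uniform$(0,1)$ and Exponential$(1)$ draws, so that $X_n \xrightarrow{P} 1/2$ and $Y_n \xrightarrow{P} 1$ by Theorem 5.1.1, and verifies that $X_n + Y_n$ concentrates near $3/2$.

```python
import numpy as np

rng = np.random.default_rng(2)
eps, reps, n = 0.05, 2_000, 5_000
# Xn ->P 1/2 and Yn ->P 1 by the Weak Law of Large Numbers.
xn = rng.uniform(size=(reps, n)).mean(axis=1)
yn = rng.exponential(size=(reps, n)).mean(axis=1)
# By Theorem 5.1.2, Xn + Yn ->P 3/2.
p_hat = float(np.mean(np.abs((xn + yn) - 1.5) >= eps))
print(f"estimated P[|(Xn + Yn) - 3/2| >= {eps}] = {p_hat:.4f}")
```

The estimated probability is already close to 0 at this sample size, consistent with the theorem.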
