Robert V. Hogg, Joseph W. McKean, Allen T. Craig

350 Consistency and Limiting Distributions

which is the desired result.
Conversely, if X_nj →P X_j for all j = 1,...,p, then by the second part of the
inequality (5.4.3),

    ε ≤ ‖X_n − X‖ ≤ ∑_{i=1}^p |X_{ni} − X_i|,

for any ε > 0. Hence

    lim_{n→∞} P[‖X_n − X‖ ≥ ε] ≤ lim_{n→∞} P[∑_{j=1}^p |X_{nj} − X_j| ≥ ε]

                                ≤ ∑_{j=1}^p lim_{n→∞} P[|X_{nj} − X_j| ≥ ε/p] = 0.

Based on this result, many of the theorems involving convergence in probability
can easily be extended to the multivariate setting. Some of these results are given
in the exercises. This is true of statistical results, too. For example, in Section
5.2, we showed that if X_1,...,X_n is a random sample from the distribution of a
random variable X with mean μ and variance σ², then X̄_n and S²_n are consistent
estimates of μ and σ². By the last theorem, we have that (X̄_n, S²_n) is a consistent
estimate of (μ, σ²).
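As a numerical illustration (not from the text), a short Monte Carlo sketch in Python with NumPy shows the pair (X̄_n, S²_n) settling near (μ, σ²) as n grows. The normal distribution and the values μ = 3, σ² = 4 are illustrative assumptions.

```python
import numpy as np

# Consistency sketch: (X̄_n, S²_n) approaches (μ, σ²) as n grows.
# The distribution and parameter values here are illustrative assumptions.
rng = np.random.default_rng(0)
mu, sigma2 = 3.0, 4.0

for n in (10**2, 10**4, 10**6):
    x = rng.normal(mu, np.sqrt(sigma2), size=n)
    xbar = x.mean()          # sample mean X̄_n
    s2 = x.var(ddof=1)       # sample variance S²_n, divisor n - 1
    print(n, round(xbar, 3), round(s2, 3))
```

For the largest n, both printed estimates sit within a few hundredths of (3, 4), consistent with the joint convergence in probability asserted above.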
As another simple application, consider the multivariate analog of the sample
mean and sample variance. Let {X_n} be a sequence of iid random vectors with
common mean vector μ and variance-covariance matrix Σ. Denote the vector of
means by

    X̄_n = (1/n) ∑_{i=1}^n X_i.                                    (5.4.5)

Of course, X̄_n is just the vector of sample means, (X̄_1,...,X̄_p)′. By the Weak Law
of Large Numbers, Theorem 5.1.1, X̄_j → μ_j, in probability, for each j. Hence, by
Theorem 5.4.1, X̄_n → μ, in probability.
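A quick NumPy sketch of (5.4.5) for a bivariate case, with an assumed mean vector and covariance matrix, shows the componentwise means of the rows landing near μ:

```python
import numpy as np

# Sketch of (5.4.5): X̄_n, the vector of sample means of iid random vectors.
# The bivariate normal parameters below are illustrative assumptions.
rng = np.random.default_rng(1)
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])

n = 200_000
X = rng.multivariate_normal(mu, Sigma, size=n)   # rows are X_1, ..., X_n
xbar = X.mean(axis=0)                            # X̄_n, componentwise means
print(xbar)                                      # close to mu for large n
```

Since each component X̄_j obeys the scalar Weak Law, the whole vector xbar converges to mu, exactly as Theorem 5.4.1 predicts.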
How about the analog of the sample variances? Let X_i = (X_{i1},...,X_{ip})′. Define
the sample variances and covariances by

    S²_{n,j} = (1/(n−1)) ∑_{i=1}^n (X_{ij} − X̄_j)²,  for j = 1,...,p,          (5.4.6)

    S_{n,jk} = (1/(n−1)) ∑_{i=1}^n (X_{ij} − X̄_j)(X_{ik} − X̄_k),  for j ≠ k = 1,...,p.  (5.4.7)
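Formulas (5.4.6) and (5.4.7) together form the sample variance-covariance matrix, which can be computed in one centered cross-product. The trivariate parameters below are illustrative assumptions.

```python
import numpy as np

# Sketch of (5.4.6)-(5.4.7): sample variances (diagonal) and covariances
# (off-diagonal) with divisor n - 1, compared entrywise with Σ.
# The mean vector and covariance matrix are illustrative assumptions.
rng = np.random.default_rng(2)
mu = np.array([0.0, 1.0, -1.0])
Sigma = np.array([[1.0, 0.3, 0.0],
                  [0.3, 2.0, 0.5],
                  [0.0, 0.5, 1.5]])

n = 500_000
X = rng.multivariate_normal(mu, Sigma, size=n)
xbar = X.mean(axis=0)
# S[j, k] = (1/(n-1)) * sum_i (X_ij - X̄_j)(X_ik - X̄_k)
S = (X - xbar).T @ (X - xbar) / (n - 1)
print(np.round(S, 2))   # entrywise close to Sigma for large n
```

The matrix S agrees with NumPy's built-in np.cov(X, rowvar=False), and its entrywise closeness to Sigma illustrates the convergence in probability discussed next.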

Assuming finite fourth moments, the Weak Law of Large Numbers shows that all
these componentwise sample variances and sample covariances converge in
probability to the distribution variances and covariances, respectively. As in our
discussion after the Weak Law of Large Numbers, the Strong Law of Large Numbers
implies that this convergence holds under the weaker assumption of the existence
of finite