5.4∗ Extensions to Multivariate Distributions

$\mathbf{v}' = (v_1, \ldots, v_p)$ as
\[
\mathbf{v} = \sum_{i=1}^{p} v_i \mathbf{e}_i .
\]

The following lemma will be useful:


Lemma 5.4.1. Let $\mathbf{v}' = (v_1, \ldots, v_p)$ be any vector in $\mathbb{R}^p$. Then
\[
|v_j| \leq \|\mathbf{v}\| \leq \sum_{i=1}^{p} |v_i|, \quad \mbox{for all } j = 1, \ldots, p. \tag{5.4.3}
\]

Proof: Note that for all $j$,
\[
v_j^2 \leq \sum_{i=1}^{p} v_i^2 = \|\mathbf{v}\|^2 ;
\]
hence, taking the square root throughout this inequality leads to the first part of the desired
inequality. The second part is

\[
\|\mathbf{v}\| = \left\| \sum_{i=1}^{p} v_i \mathbf{e}_i \right\| \leq \sum_{i=1}^{p} |v_i| \, \|\mathbf{e}_i\| = \sum_{i=1}^{p} |v_i|,
\]
where the inequality follows from the triangle inequality and the final equality holds because each basis vector satisfies $\|\mathbf{e}_i\| = 1$.
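As a quick check of (5.4.3) on a concrete vector of our own choosing (not from the text), take $\mathbf{v}' = (3, -4)$ in $\mathbb{R}^2$. Then
\[
|v_1| = 3 \leq \|\mathbf{v}\| = \sqrt{3^2 + (-4)^2} = 5 \leq |3| + |-4| = 7,
\]
and likewise $|v_2| = 4 \leq 5 \leq 7$, so both parts of the inequality hold.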

Let $\{\mathbf{X}_n\}$ denote a sequence of $p$-dimensional vectors. Because the absolute
value is the Euclidean norm in $\mathbb{R}^1$, the definition of convergence in probability for
random vectors is an immediate generalization:

Definition 5.4.1. Let $\{\mathbf{X}_n\}$ be a sequence of $p$-dimensional vectors and let $\mathbf{X}$ be a
random vector, all defined on the same sample space. We say that $\{\mathbf{X}_n\}$ converges
in probability to $\mathbf{X}$ if
\[
\lim_{n \to \infty} P[\|\mathbf{X}_n - \mathbf{X}\| \geq \epsilon] = 0, \tag{5.4.4}
\]
for all $\epsilon > 0$. As in the univariate case, we write $\mathbf{X}_n \stackrel{P}{\to} \mathbf{X}$.
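To see Definition 5.4.1 numerically, here is a minimal Monte Carlo sketch (assuming NumPy; the mean vector mu, the tolerance eps, and the sample sizes are illustrative choices, not from the text). It takes $\mathbf{X}_n$ to be the sample mean of $n$ iid bivariate normal draws, so $\mathbf{X}_n \stackrel{P}{\to} \boldsymbol{\mu}$, and estimates the probability in (5.4.4) for increasing $n$:

```python
import numpy as np

# Monte Carlo sketch of Definition 5.4.1: estimate P[||X_n - X|| >= eps],
# where X_n is the mean of n iid N(mu, I) bivariate draws and X = mu.
# (mu, eps, and the sample sizes below are illustrative choices.)
rng = np.random.default_rng(12345)
mu = np.array([1.0, -2.0])   # limit vector: X_n -> mu in probability
eps = 0.1
reps = 100_000               # Monte Carlo replications per sample size

for n in [10, 100, 1000, 10_000]:
    # The mean of n iid N(mu, I) draws is exactly N(mu, I/n), so we can
    # sample the p = 2 components of X_n directly.
    xbar = rng.normal(loc=mu, scale=1.0 / np.sqrt(n), size=(reps, 2))
    dist = np.linalg.norm(xbar - mu, axis=1)   # Euclidean norms ||X_n - mu||
    print(f"n = {n:6d}:  P[||X_n - mu|| >= {eps}] ~ {(dist >= eps).mean():.4f}")
```

The estimated probabilities shrink toward zero as $n$ grows, which is the statement of (5.4.4); as the next theorem shows, this is equivalent to each coordinate converging in probability on its own.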

As the next theorem shows, convergence in probability of vectors is equivalent
to componentwise convergence in probability.


Theorem 5.4.1. Let $\{\mathbf{X}_n\}$ be a sequence of $p$-dimensional vectors and let $\mathbf{X}$ be a
random vector, all defined on the same sample space. Then
\[
\mathbf{X}_n \stackrel{P}{\to} \mathbf{X} \;\;\mbox{if and only if}\;\; X_{nj} \stackrel{P}{\to} X_j \;\,\mbox{for all } j = 1, \ldots, p.
\]

Proof: This follows immediately from Lemma 5.4.1. Suppose $\mathbf{X}_n \stackrel{P}{\to} \mathbf{X}$. For any $j$,
the first part of inequality (5.4.3) gives, for $\epsilon > 0$,
\[
\epsilon \leq |X_{nj} - X_j| \leq \|\mathbf{X}_n - \mathbf{X}\|
\]
on the event $\{|X_{nj} - X_j| \geq \epsilon\}$; that is, this event is contained in $\{\|\mathbf{X}_n - \mathbf{X}\| \geq \epsilon\}$. Hence
\[
\lim_{n \to \infty} P[|X_{nj} - X_j| \geq \epsilon] \leq \lim_{n \to \infty} P[\|\mathbf{X}_n - \mathbf{X}\| \geq \epsilon] = 0,
\]