5.4 ∗Extensions to Multivariate Distributions

Any vector $\mathbf{v}' = (v_1, \ldots, v_p)$ can be written as $\mathbf{v} = \sum_{i=1}^{p} v_i \mathbf{e}_i$, where $\mathbf{e}_1, \ldots, \mathbf{e}_p$ denote the standard basis vectors of $\mathbb{R}^p$. The following lemma will be useful:
Lemma 5.4.1. Let $\mathbf{v}' = (v_1, \ldots, v_p)$ be any vector in $\mathbb{R}^p$. Then
$$|v_j| \le \|\mathbf{v}\| \le \sum_{i=1}^{p} |v_i|, \quad \text{for all } j = 1, \ldots, p. \tag{5.4.3}$$

Proof: Note that for all $j$, $v_j^2 \le \sum_{i=1}^{p} v_i^2 = \|\mathbf{v}\|^2$; hence, taking the square root of this inequality leads to the first part of the desired inequality. The second part follows from the triangle inequality and the fact that $\|\mathbf{e}_i\| = 1$:
$$\|\mathbf{v}\| = \Bigl\| \sum_{i=1}^{p} v_i \mathbf{e}_i \Bigr\| \le \sum_{i=1}^{p} |v_i| \, \|\mathbf{e}_i\| = \sum_{i=1}^{p} |v_i|.$$
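To make the two bounds concrete, consider the illustrative vector $\mathbf{v}' = (3, -4)$ in $\mathbb{R}^2$ (a made-up choice, not from the text). The three quantities in (5.4.3) are
$$\max_{j} |v_j| = 4, \qquad \|\mathbf{v}\| = \sqrt{3^2 + (-4)^2} = 5, \qquad \sum_{i=1}^{2} |v_i| = 3 + 4 = 7,$$
so $|v_j| \le \|\mathbf{v}\| \le \sum_{i} |v_i|$ for each $j$, exactly as the lemma asserts.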
Let $\{\mathbf{X}_n\}$ denote a sequence of $p$-dimensional vectors. Because the absolute value is the Euclidean norm in $\mathbb{R}^1$, the definition of convergence in probability for
random vectors is an immediate generalization:

Definition 5.4.1. Let $\{\mathbf{X}_n\}$ be a sequence of $p$-dimensional vectors and let $\mathbf{X}$ be a random vector, all defined on the same sample space. We say that $\{\mathbf{X}_n\}$ converges in probability to $\mathbf{X}$ if
$$\lim_{n \to \infty} P[\|\mathbf{X}_n - \mathbf{X}\| \ge \varepsilon] = 0, \tag{5.4.4}$$
for all $\varepsilon > 0$. As in the univariate case, we write $\mathbf{X}_n \xrightarrow{P} \mathbf{X}$.

As the next theorem shows, convergence in probability of vectors is equivalent
to componentwise convergence in probability.
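Before stating the theorem, a quick simulation makes Definition 5.4.1 tangible. The following is a minimal Monte Carlo sketch of ours, assuming NumPy; the choice of $N(\mathbf{0}, I_2)$ data, $\varepsilon = 0.1$, and the sample sizes are all arbitrary. Taking $\mathbf{X}_n$ to be the vector of sample means of $n$ iid bivariate normal observations, $\mathbf{X}_n \xrightarrow{P} \boldsymbol{\mu}$ by the Weak Law of Large Numbers applied to each component, so the estimated $P[\|\mathbf{X}_n - \boldsymbol{\mu}\| \ge \varepsilon]$ should shrink toward zero as $n$ grows, and the componentwise probabilities should shrink with it.

```python
import numpy as np

# Monte Carlo sketch of Definition 5.4.1 -- an illustrative example, not from
# the text.  X_n is the vector of sample means of n iid N(0, I_2) draws, so
# X_n converges in probability to mu = (0, 0) componentwise by the WLLN.

rng = np.random.default_rng(20)   # arbitrary seed for reproducibility
mu = np.zeros(2)
eps = 0.1                         # the epsilon of Definition 5.4.1
reps = 2000                       # Monte Carlo replications

for n in [50, 200, 1000]:
    # reps independent realizations of X_n, each a mean of n bivariate draws
    xbar = rng.standard_normal((reps, n, 2)).mean(axis=1)
    diff = xbar - mu
    # estimate P[ ||X_n - mu|| >= eps ] and its componentwise analogues
    p_vec = np.mean(np.linalg.norm(diff, axis=1) >= eps)
    p_comp = np.mean(np.abs(diff) >= eps, axis=0)
    print(f"n={n:5d}  P[||X_n-mu||>=eps] ~ {p_vec:.4f}  componentwise ~ {p_comp}")
```

By Lemma 5.4.1, the vector event is sandwiched between the componentwise events, which is exactly what drives the equivalence proved next.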
Theorem 5.4.1. Let $\{\mathbf{X}_n\}$ be a sequence of $p$-dimensional vectors and let $\mathbf{X}$ be a
random vector, all defined on the same sample space. Then
$$\mathbf{X}_n \xrightarrow{P} \mathbf{X} \ \text{ if and only if } \ X_{nj} \xrightarrow{P} X_j \ \text{ for all } j = 1, \ldots, p.$$

Proof: This follows immediately from Lemma 5.4.1. Suppose $\mathbf{X}_n \xrightarrow{P} \mathbf{X}$. For any $j$,
from the first part of inequality (5.4.3), we have $|X_{nj} - X_j| \le \|\mathbf{X}_n - \mathbf{X}\|$; thus, for $\varepsilon > 0$, the event $|X_{nj} - X_j| \ge \varepsilon$ entails $\|\mathbf{X}_n - \mathbf{X}\| \ge \varepsilon$. Hence
$$\lim_{n \to \infty} P[|X_{nj} - X_j| \ge \varepsilon] \le \lim_{n \to \infty} P[\|\mathbf{X}_n - \mathbf{X}\| \ge \varepsilon] = 0,$$