That is, we can exactly recover the data by multiplying the principal components
by the transpose of the matrix of eigenvectors. Let's write it down explicitly:


\[
\begin{aligned}
X_1 &= (PCV')_1 = PC_1 V_{11} + PC_2 V_{12} + \cdots + PC_8 V_{18} \\
&\;\vdots \\
X_i &= (PCV')_i = PC_1 V_{i1} + PC_2 V_{i2} + \cdots + PC_8 V_{i8} \\
&\;\vdots \\
X_8 &= (PCV')_8 = PC_1 V_{81} + PC_2 V_{82} + \cdots + PC_8 V_{88}
\end{aligned}
\tag{12.15}
\]
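To make (12.15) concrete, here is a minimal numpy sketch, assuming a hypothetical centered data matrix `X` with eight columns (all names below are illustrative, not from the text); it checks that multiplying the principal components by the transpose of the eigenvector matrix recovers the data exactly:

```python
import numpy as np

# Hypothetical stand-in for the 8 data time series used in this chapter.
rng = np.random.default_rng(0)
X = rng.standard_normal((250, 8))   # 250 observations of 8 series, one per column
X -= X.mean(axis=0)                 # center each series

S = np.cov(X, rowvar=False)         # 8 x 8 sample covariance matrix
eigvals, V = np.linalg.eigh(S)      # columns of V are eigenvectors, eigenvalues ascending

PC = X @ V                          # principal components, one per column
X_rec = PC @ V.T                    # X = PC V', as in (12.15)

print(np.allclose(X, X_rec))        # True: the data are recovered exactly
```

Exact recovery holds because V is orthogonal, so VV' is the identity and X = XVV' = (PC)V'.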

That is, the ith data time series is a weighted sum of principal components,
with the jth weight equal to the ith component of the jth eigenvector. Contrast
these weights with those of the representation of PCs in terms of data:


\[
\begin{aligned}
PC_1 &= (XV)_1 = X_1 V_{11} + X_2 V_{21} + \cdots + X_8 V_{81} \\
&\;\vdots \\
PC_i &= (XV)_i = X_1 V_{1i} + X_2 V_{2i} + \cdots + X_8 V_{8i} \\
&\;\vdots \\
PC_8 &= (XV)_8 = X_1 V_{18} + X_2 V_{28} + \cdots + X_8 V_{88}
\end{aligned}
\]

where the ith principal component is a weighted sum of data, with the jth
weight equal to the jth component of the ith eigenvector.
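Continuing the sketch above, both weight patterns are easy to check elementwise (the index `i` below is an arbitrary choice of ours):

```python
i = 2                                           # arbitrary index for the check
# ith data series as a weighted sum of PCs: weights are row i of V
x_i = sum(PC[:, j] * V[i, j] for j in range(8))
# ith PC as a weighted sum of data series: weights are column i of V
pc_i = sum(X[:, j] * V[j, i] for j in range(8))
print(np.allclose(x_i, X[:, i]), np.allclose(pc_i, PC[:, i]))   # True True
```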
If we compute the variance of each principal component, we obtain the
following vector:


\[
\mathrm{var} = [0.0381 \;\; 0.0541 \;\; 0.1131 \;\; 0.1563 \;\; 0.2221 \;\; 0.4529 \;\; 1.3474 \;\; 5.6160]'
\tag{12.16}
\]

We can immediately see that the variance of each principal component is
equal to the corresponding eigenvalue: \(\mathrm{var}(PC_i) = D_{ii} = \lambda_i\). This is a general
property: the variance of a principal component is equal to the corresponding
eigenvalue of the covariance matrix of the data.
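In the same sketch as above, this property can be verified numerically:

```python
pc_var = PC.var(axis=0, ddof=1)      # sample variance of each PC (ddof=1 matches np.cov)
print(np.allclose(pc_var, eigvals))  # True: var(PC_i) equals the eigenvalue lambda_i
```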


Step 4: Look at the Decay of the Variances of Principal Components The magnitude
of the variances of principal components differs greatly across principal
components. The smallest variance, corresponding to PC1, is 0.0381, while
the largest, corresponding to PC8, is 5.6. Panels A and B of Figure 12.1
show the plots of the first and last principal components and a plot of
the eigenvalues.
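Continuing the sketch, the decay can be inspected directly; note that `np.linalg.eigh` returns eigenvalues in ascending order, matching the ordering in (12.16):

```python
total = eigvals.sum()
for k, lam in enumerate(eigvals, 1):
    print(f"PC{k}: variance {lam:.4f}, share of total {lam / total:.1%}")
```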
We now understand why it is important that principal components be
orthogonal vectors. Because principal components are orthogonal vectors,
they are mutually uncorrelated, and the total variance of the data is
simply the sum of the variances of the individual principal components.
