

us introduce a transformation $S_1$

\[
S_1 =
\begin{pmatrix}
\cos\theta & 0 & 0 & \sin\theta \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
-\sin\theta & 0 & 0 & \cos\theta
\end{pmatrix}.
\]
Then the similarity transformation

\[
S_1^T A S_1 = A' =
\begin{pmatrix}
d_1' & e_1' & 0 & 0 \\
e_1' & d_2 & e_2 & 0 \\
0 & e_2 & d_3 & e_3' \\
0 & 0 & e_3' & d_4'
\end{pmatrix}
\]
produces a matrix where the primed elements in $A'$ have been changed by the transformation, whereas the unprimed elements are unchanged. If we now choose $\theta$ so that the element $a_{21}' = e_1' = 0$, then the first eigenvalue is given by $a_{11}' = d_1'$.
This procedure can be continued on the remaining three-dimensional submatrix for the
next eigenvalue. Thus, after four transformations we have the desired diagonal form.
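To see the underlying principle at work, the short NumPy sketch below (not taken from the text; the matrix entries and the rotation angle are arbitrary choices) builds a symmetric tridiagonal $4\times 4$ matrix, applies the plane rotation $S_1$, and checks that the eigenvalues of $S_1^T A S_1$ coincide with those of $A$, as any similarity transformation with an orthogonal $S_1$ guarantees.

```python
import numpy as np

# A symmetric tridiagonal 4x4 matrix (arbitrary example values).
d = np.array([2.0, 3.0, 4.0, 5.0])   # diagonal elements d_1..d_4
e = np.array([1.0, 0.5, 0.25])       # off-diagonal elements e_1..e_3
A = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)

# Plane rotation S_1 acting on rows/columns 1 and 4 (indices 0 and 3).
theta = 0.3                          # arbitrary rotation angle
S1 = np.eye(4)
S1[0, 0] = S1[3, 3] = np.cos(theta)
S1[0, 3] = np.sin(theta)
S1[3, 0] = -np.sin(theta)

Aprime = S1.T @ A @ S1               # the similarity transformation

# The eigenvalues are invariant under the similarity transformation.
print(np.linalg.eigvalsh(A))
print(np.linalg.eigvalsh(Aprime))
```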


7.6 Power Methods


We assume $\hat{A}$ can be diagonalized. Let $\lambda_1, \lambda_2, \dots, \lambda_n$ be the $n$ eigenvalues (counted with
multiplicity) of $\hat{A}$ and let $v_1, v_2, \dots, v_n$ be the corresponding eigenvectors. We assume that $\lambda_1$
is the dominant eigenvalue, so that $|\lambda_1| > |\lambda_j|$ for $j > 1$.
The initial vector $b_0$ can be written

\[
b_0 = c_1 v_1 + c_2 v_2 + \cdots + c_n v_n.
\]

If $b_0$ is chosen randomly (with uniform probability), then $c_1 \neq 0$ with probability 1. Now,


\[
\begin{aligned}
\hat{A}^k b_0 &= c_1 \hat{A}^k v_1 + c_2 \hat{A}^k v_2 + \cdots + c_n \hat{A}^k v_n \\
&= c_1 \lambda_1^k v_1 + c_2 \lambda_2^k v_2 + \cdots + c_n \lambda_n^k v_n \\
&= c_1 \lambda_1^k \left( v_1 + \frac{c_2}{c_1}\left(\frac{\lambda_2}{\lambda_1}\right)^k v_2 + \cdots + \frac{c_n}{c_1}\left(\frac{\lambda_n}{\lambda_1}\right)^k v_n \right).
\end{aligned}
\]

The expression within parentheses converges to $v_1$ because $|\lambda_j/\lambda_1| < 1$ for $j > 1$. On the other
hand, we have

\[
b_k = \frac{\hat{A}^k b_0}{\| \hat{A}^k b_0 \|}.
\]

Therefore, $b_k$ converges to (a multiple of) the eigenvector $v_1$. The convergence is geometric,
with ratio

\[
\left| \frac{\lambda_2}{\lambda_1} \right|,
\]

where $\lambda_2$ denotes the second dominant eigenvalue. Thus, the method converges slowly if
there is an eigenvalue close in magnitude to the dominant eigenvalue.
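As an illustration of these formulas, here is a minimal power-iteration sketch in NumPy (the test matrix, tolerance, and iteration limit are arbitrary choices, not taken from the text). It repeatedly applies $\hat{A}$ to a random start vector, normalizes the result, and estimates the dominant eigenvalue through the Rayleigh quotient $b_k^T \hat{A} b_k$; the number of iterations needed grows as $|\lambda_2/\lambda_1|$ approaches one.

```python
import numpy as np

def power_method(A, max_iter=200, tol=1e-12):
    """Estimate the dominant eigenvalue and eigenvector of A by power iteration."""
    rng = np.random.default_rng(0)
    b = rng.standard_normal(A.shape[0])   # random b_0: c_1 != 0 with probability 1
    b /= np.linalg.norm(b)
    lam = 0.0
    for _ in range(max_iter):
        Ab = A @ b
        b_new = Ab / np.linalg.norm(Ab)   # b_{k+1} = A b_k / ||A b_k||
        lam_new = b_new @ A @ b_new       # Rayleigh quotient estimate of lambda_1
        if abs(lam_new - lam) < tol:
            break
        lam, b = lam_new, b_new
    return lam, b

# Arbitrary symmetric test matrix with a clearly dominant eigenvalue.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

lam, v = power_method(A)
print(lam, np.linalg.eigvalsh(A)[-1])     # compare with NumPy's largest eigenvalue
```

Normalizing at every step keeps the iterate well scaled; without it, $\hat{A}^k b_0$ would overflow or underflow as $|\lambda_1|^k$ grows or shrinks.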
Under the assumptions:

  • $\hat{A}$ has an eigenvalue that is strictly greater in magnitude than its other eigenvalues;
  • the starting vector $b_0$ has a nonzero component in the direction of an eigenvector associated with the dominant eigenvalue.
