Computational Physics - Department of Physics


7 Eigensystems


then:



  • A subsequence of (b_k) converges to an eigenvector associated with the dominant
    eigenvalue.


Note that the sequence (b_k) does not necessarily converge. It can be shown that
b_k = e^{iφ_k} v_1 + r_k, where v_1 is an eigenvector associated with the dominant
eigenvalue and ‖r_k‖ → 0. The presence of the term e^{iφ_k} implies that (b_k) does not
converge unless e^{iφ_k} = 1. Under the two assumptions listed above, the sequence (μ_k)
defined by

    μ_k = (b_k^* Â b_k) / (b_k^* b_k)

converges to the dominant eigenvalue.
Power iteration is not used very much because it can find only the dominant eigenvalue.
The algorithm is, however, very useful in some specific cases. For instance, Google uses it
to calculate the PageRank of documents in its search engine. For matrices that are well-
conditioned and as sparse as the web matrix, the power iteration method can be more efficient
than other methods of finding the dominant eigenvector.
Some of the more advanced eigenvalue algorithms can be understood as variations of
the power iteration. For instance, the inverse iteration method applies power iteration to
the matrix Â^{-1}. Other algorithms look at the whole subspace generated by the vectors b_k.
This subspace is known as the Krylov subspace. It can be computed by Arnoldi iteration or
Lanczos iteration. The latter is the method of choice for diagonalizing symmetric matrices with
huge dimensionalities. We discuss the Lanczos algorithm in the next section.
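The inverse iteration mentioned above is the same loop applied to Â^{-1}; a shift σ (an addition here, with σ = 0 reproducing the plain inverse iteration of the text) targets the eigenvalue closest to σ. A minimal NumPy sketch, forming the inverse explicitly only because the demo matrices are tiny:

```python
import numpy as np

def inverse_iteration(A, shift=0.0, tol=1e-10, max_iter=1000):
    """Power iteration applied to (A - shift*I)^{-1}.

    With shift = 0 this converges to the eigenvalue of A smallest in
    magnitude; in production code one would factorize A - shift*I and
    solve at each step instead of forming the inverse.
    """
    n = A.shape[0]
    Ainv = np.linalg.inv(A - shift * np.eye(n))  # fine for tiny demos
    rng = np.random.default_rng(0)
    b = rng.standard_normal(n)                   # random start vector
    mu = 0.0
    for _ in range(max_iter):
        b_new = Ainv @ b
        b_new /= np.linalg.norm(b_new)
        mu_new = b_new @ A @ b_new   # Rayleigh quotient in A, not A^{-1}
        if abs(mu_new - mu) < tol:
            break
        b, mu = b_new, mu_new
    return mu_new, b_new
```

On [[2, 1], [1, 2]] with shift 0 this picks out the smallest eigenvalue, 1, instead of the dominant one.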


7.7 Iterative methods: Lanczos’ algorithm


The Lanczos algorithm is applied to symmetric eigenvalue problems. The basic features, for
a real symmetric matrix Â of dimension n × n (normally huge, n > 10^6, and sparse), are


  • The Lanczos algorithm generates a sequence of real tridiagonal matrices T_k of dimension
    k × k with k ≤ n, with the property that the extremal eigenvalues of T_k are progressively
    better estimates of Â's extremal eigenvalues.

  • The method converges to the extremal eigenvalues.

  • The similarity transformation is
    T̂ = Q̂^T Â Q̂,
    with the first vector Q̂ê_1 = q̂_1.


We are going to solve iteratively
T̂ = Q̂^T Â Q̂,
with the first vector Q̂ê_1 = q̂_1. We can then write out the matrix Q̂ in terms of its column
vectors
Q̂ = [q̂_1 q̂_2 ... q̂_n].
The matrix
T̂ = Q̂^T Â Q̂
can then be written as

T̂ = ⎡ α_1  β_1   0    ...   ...    0     ⎤
     ⎢ β_1  α_2  β_2    0    ...    0     ⎥
     ⎢  0   β_2  α_3   β_3   ...    0     ⎥
     ⎢ ...  ...  ...   ...   ...    0     ⎥
     ⎢      ...  β_{n-2} α_{n-1} β_{n-1}  ⎥
     ⎣  0   ...  ...    0   β_{n-1} α_n   ⎦
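The α_j and β_j of this tridiagonal matrix are generated by a three-term recurrence. A minimal NumPy sketch (no reorthogonalization; the random start vector q_1 and the dense representation are illustrative choices, not from the text):

```python
import numpy as np

def lanczos(A, k, seed=0):
    """Build the k x k tridiagonal T_k of the Lanczos method.

    Returns the diagonal alpha_1..alpha_k, the off-diagonal
    beta_1..beta_{k-1}, and the orthonormal Lanczos vectors
    q_1..q_k as the columns of Q.
    """
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)              # q_1, first column of Q
    Q = np.zeros((n, k))
    alpha = np.zeros(k)
    beta = np.zeros(k - 1)
    q_prev = np.zeros(n)
    for j in range(k):
        Q[:, j] = q
        w = A @ q
        alpha[j] = q @ w                # alpha_{j+1} = q_j^T A q_j
        w -= alpha[j] * q               # three-term recurrence:
        if j > 0:
            w -= beta[j - 1] * q_prev   # subtract previous direction
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            q_prev, q = q, w / beta[j]  # next Lanczos vector q_{j+2}
    return alpha, beta, Q
```

Forming T_k densely (np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)) and diagonalizing it then gives the Ritz values; for k = n the eigenvalues of T̂ coincide with those of Â up to roundoff, while for k ≪ n the extremal Ritz values are already good estimates of Â's extremal eigenvalues.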