6.2 Computation of eigen values

Since $u_k = A^k u_0 = a_1 \lambda_1^k x_1 + a_2 \lambda_2^k x_2 + \cdots + a_n \lambda_n^k x_n$, the vectors $u_k$ point more and more accurately towards the direction of $x_n$, and the convergence factor is $r = \frac{|\lambda_{n-1}|}{|\lambda_n|}$.


Example 6.2.1 (Markov Process, continued) Recall Example 4.4.6:

$$A = \begin{bmatrix} 0.95 & 0.01 \\ 0.05 & 0.99 \end{bmatrix}, \qquad \lambda_1 = 1 \leftrightarrow v_1 = \begin{bmatrix} 1/6 \\ 5/6 \end{bmatrix}, \qquad \lambda_2 = 0.94.$$

Starting the power method at $u_0 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$, we obtain

$$u_1 = \begin{bmatrix} 0.95 \\ 0.05 \end{bmatrix}, \quad u_2 = \begin{bmatrix} 0.903 \\ 0.097 \end{bmatrix}, \quad u_3 = \begin{bmatrix} 0.85882 \\ 0.14118 \end{bmatrix}, \quad \ldots, \quad u_{210} = \begin{bmatrix} 0.16666 \\ 0.83333 \end{bmatrix} = \alpha v_1.$$

The convergence rate is quite low, $r = \frac{0.94}{1} = 0.94$. Since the power method is designed especially for large sparse matrices, it converges after 210 iterations when six digits after the decimal point are required.
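The following is a minimal NumPy sketch (not part of the text) that reproduces the iterates above by running the plain recursion $u_{k+1} = A u_k$ for 210 steps. No normalization is applied, since the columns of $A$ sum to one and the entries of each $u_k$ therefore keep summing to one.

```python
import numpy as np

# Transition matrix and starting vector from Example 6.2.1
A = np.array([[0.95, 0.01],
              [0.05, 0.99]])
u = np.array([1.0, 0.0])

# Plain power iteration: u_{k+1} = A u_k.
# No normalization is needed here because A is column-stochastic,
# so the entries of u_k always sum to 1.
for k in range(1, 211):
    u = A @ u
    if k in (1, 2, 3, 210):
        print(f"u_{k} = {u}")

# The iterates approach the stationary vector v_1 = (1/6, 5/6)^T,
# the eigenvector of the dominant eigenvalue lambda_1 = 1.
print("v_1 =", np.array([1 / 6, 5 / 6]))
```

For a large sparse $A$, the matrix-vector product is the only operation needed per step, which is what makes the method attractive despite the slow convergence.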


Remark 6.2.2 (How to increase the convergence rate) If $r \approx 1$, the convergence is slow. If $|\lambda_{n-1}| = |\lambda_n|$, there is no convergence at all. There are some methods to increase the convergence rate:


i. Block power method: Work with several vectors at once. Start with $p$ orthonormal vectors, multiply by $A$, then apply Gram-Schmidt to orthogonalize again. Then, we have $r' = \frac{|\lambda_{n-p}|}{|\lambda_n|}$ (a numerical sketch of this iteration is given after the remark).


ii. Inverse power method: Operate with $A^{-1}$ instead of $A$: $v_{k+1} = A^{-1} v_k \Rightarrow A v_{k+1} = v_k$ (save $L$ and $U$!). The convergence rate is $r'' = \frac{|\lambda_1|}{|\lambda_2|}$, provided that $r'' < 1$. This method guarantees convergence to the eigen vector of the smallest eigen value.
iii. Shifted inverse power method: The best method. Let $A$ be replaced by $A - \beta I$. All of the eigen values are shifted by $\beta$. Consequently, $r''' = \frac{|\lambda_1 - \beta|}{|\lambda_2 - \beta|}$. If we choose $\beta$ as a good approximation to $\lambda_1$, the convergence will be accelerated (see the sketch after this remark).


$$(A - \beta I) w_{k+1} = w_k \;\Rightarrow\; w_k = \frac{\alpha_1 x_1}{(\lambda_1 - \beta)^k} + \frac{\alpha_2 x_2}{(\lambda_2 - \beta)^k} + \cdots + \frac{\alpha_n x_n}{(\lambda_n - \beta)^k}.$$

If we know $\beta$, then we may use $A - \beta I = LU$ and solve $U x_1 = (1, 1, \cdots, 1)^T$ by back substitution. We can choose $\beta = \beta_k$ at each step, so that $(A - \beta_k I) w_{k+1} = w_k$. If $A = A^T$ and $\beta_k = R(w_k) = \frac{w_k^T A w_k}{w_k^T w_k}$, the Rayleigh quotient, then we will get cubic convergence.
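As a rough numerical companion to items ii and iii (this sketch and its parameter choices, such as the shift $\beta = 0.9$ and the tolerance, are illustrative assumptions, not from the text), the following code runs the shifted inverse power method on the matrix of Example 6.2.1, factoring $A - \beta I = LU$ once and reusing the factors at every step:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def shifted_inverse_power(A, beta, w0, tol=1e-10, max_iter=100):
    """Shifted inverse power method: solve (A - beta*I) w_{k+1} = w_k repeatedly."""
    lu, piv = lu_factor(A - beta * np.eye(A.shape[0]))  # factor once: save L and U
    w = w0 / np.linalg.norm(w0)
    lam = w @ A @ w
    for k in range(1, max_iter + 1):
        w = lu_solve((lu, piv), w)      # one forward/back substitution per step
        w /= np.linalg.norm(w)          # normalize to keep the iterate bounded
        lam = w @ A @ w                 # Rayleigh quotient estimate of the eigenvalue
        if np.linalg.norm(A @ w - lam * w) < tol:
            break
    return lam, w, k

A = np.array([[0.95, 0.01],
              [0.05, 0.99]])
lam, w, k = shifted_inverse_power(A, beta=0.9, w0=np.array([1.0, 0.0]))
print(lam, w, k)   # converges to lambda = 0.94, eigenvector proportional to (1, -1)
```

With $\beta = 0$ this reduces to the plain inverse power method of item ii; updating the shift at every step with the Rayleigh quotient $\beta_k = R(w_k)$ (which requires refactoring $A - \beta_k I$ each time) gives the rapidly convergent variant described above.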
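For item i, here is a similarly hedged sketch of the block power method (orthogonal iteration); `np.linalg.qr` stands in for the Gram-Schmidt re-orthogonalization mentioned in the text, and the block size $p$ and iteration count are arbitrary choices made for the demonstration:

```python
import numpy as np

def block_power(A, p, iters=200, seed=0):
    """Block power method (orthogonal iteration) with a block of p vectors."""
    rng = np.random.default_rng(seed)
    # start from p random orthonormal vectors
    Q, _ = np.linalg.qr(rng.standard_normal((A.shape[0], p)))
    for _ in range(iters):
        Z = A @ Q               # multiply the whole block by A
        Q, _ = np.linalg.qr(Z)  # re-orthonormalize (the Gram-Schmidt step's role)
    # Rayleigh-Ritz: eigenvalues of the small p-by-p projected matrix
    return np.linalg.eigvals(Q.T @ A @ Q), Q

A = np.array([[0.95, 0.01],
              [0.05, 0.99]])
vals, Q = block_power(A, p=2)
print(np.sort(vals))   # approximately [0.94, 1.0]
```

On this $2 \times 2$ example the block with $p = 2$ spans the whole space and recovers both eigenvalues at once; for a large sparse matrix one would take $p \ll n$ and obtain approximations to the $p$ dominant eigenvalues.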