Engineering Optimization: Theory and Practice, Fourth Edition


290 Nonlinear Programming I: One-Dimensional Minimization Methods


Iteration 2

$$f_2 = f(\lambda_2) = -0.303368, \quad f_2^+ = f(\lambda_2 + \Delta\lambda) = -0.304662, \quad f_2^- = f(\lambda_2 - \Delta\lambda) = -0.301916$$

$$\lambda_3 = \lambda_2 - \frac{\Delta\lambda\,(f_2^+ - f_2^-)}{2(f_2^+ - 2f_2 + f_2^-)} = 0.465390$$

Convergence check:

$$|f'(\lambda_3)| = \left|\frac{f_3^+ - f_3^-}{2\,\Delta\lambda}\right| = 0.017700 > \varepsilon$$

Iteration 3

$$f_3 = f(\lambda_3) = -0.309885, \quad f_3^+ = f(\lambda_3 + \Delta\lambda) = -0.310004, \quad f_3^- = f(\lambda_3 - \Delta\lambda) = -0.309650$$

$$\lambda_4 = \lambda_3 - \frac{\Delta\lambda\,(f_3^+ - f_3^-)}{2(f_3^+ - 2f_3 + f_3^-)} = 0.480600$$

Convergence check:

$$|f'(\lambda_4)| = \left|\frac{f_4^+ - f_4^-}{2\,\Delta\lambda}\right| = 0.000350 < \varepsilon$$

Since the process has converged, we take the optimum solution as $\lambda^* \approx \lambda_4 = 0.480600$.
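The iterations above can be reproduced numerically. Below is a minimal sketch of the finite-difference quasi-Newton scheme, assuming the example function used earlier in this chapter, $f(\lambda) = 0.65 - 0.75/(1+\lambda^2) - 0.65\,\lambda\,\tan^{-1}(1/\lambda)$, and assuming the starting point $\lambda_1 = 0.1$, step $\Delta\lambda = 0.01$, and tolerance $\varepsilon = 0.01$ (the step size is consistent with the convergence-check values shown above; the other settings are illustrative):

```python
import math

def f(lam):
    # Assumed example function from earlier in the chapter:
    # f(lambda) = 0.65 - 0.75/(1 + lambda^2) - 0.65*lambda*arctan(1/lambda)
    return 0.65 - 0.75 / (1.0 + lam**2) - 0.65 * lam * math.atan(1.0 / lam)

def quasi_newton(lam, dlam=0.01, eps=0.01, max_iter=50):
    """One-dimensional quasi-Newton minimization with finite differences."""
    for _ in range(max_iter):
        fc = f(lam)          # f_i
        fp = f(lam + dlam)   # f_i^+
        fm = f(lam - dlam)   # f_i^-
        # Convergence check: |f'(lam)| approximated by |(f+ - f-)/(2*dlam)|
        if abs((fp - fm) / (2.0 * dlam)) < eps:
            return lam
        # lambda_{i+1} = lambda_i - dlam*(f+ - f-) / (2*(f+ - 2*f + f-))
        lam = lam - dlam * (fp - fm) / (2.0 * (fp - 2.0 * fc + fm))
    return lam

lam_star = quasi_newton(0.1)
```

With these settings the iterates approach the value $\lambda_4 = 0.480600$ obtained above; the quadratic (three-point) update gives the near-quadratic convergence visible in the shrinking derivative estimates.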

5.12.3 Secant Method


The secant method uses an equation similar to Eq. (5.64):

$$f'(\lambda) = f'(\lambda_i) + s(\lambda - \lambda_i) = 0 \qquad (5.71)$$

where $s$ is the slope of the line connecting the two points $(A, f'(A))$ and $(B, f'(B))$, where $A$ and $B$ denote two different approximations to the correct solution, $\lambda^*$. The slope $s$ can be expressed as (Fig. 5.19)

$$s = \frac{f'(B) - f'(A)}{B - A} \qquad (5.72)$$

Equation (5.71) approximates the function $f'(\lambda)$ between $A$ and $B$ as a linear equation (secant), and hence the solution of Eq. (5.71) gives the new approximation to the root of $f'(\lambda)$ as

$$\lambda_{i+1} = \lambda_i - \frac{f'(\lambda_i)}{s} = A - \frac{f'(A)(B - A)}{f'(B) - f'(A)} \qquad (5.73)$$
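As a sketch, the update of Eq. (5.73) can be applied to the same assumed example function, $f(\lambda) = 0.65 - 0.75/(1+\lambda^2) - 0.65\,\lambda\,\tan^{-1}(1/\lambda)$, with $f'(\lambda)$ approximated by a central difference; the starting points $A = 0.3$ and $B = 0.6$ are illustrative choices, not values from the text:

```python
import math

def f(lam):
    # Assumed example function (same one used for the quasi-Newton iterations).
    return 0.65 - 0.75 / (1.0 + lam**2) - 0.65 * lam * math.atan(1.0 / lam)

def df(lam, h=1.0e-5):
    # Central-difference approximation to f'(lambda).
    return (f(lam + h) - f(lam - h)) / (2.0 * h)

def secant(A, B, eps=1.0e-5, max_iter=50):
    """Secant iteration for the root of f'(lambda), per Eq. (5.73)."""
    for _ in range(max_iter):
        # lambda_{i+1} = A - f'(A)*(B - A) / (f'(B) - f'(A))
        lam = A - df(A) * (B - A) / (df(B) - df(A))
        if abs(df(lam)) < eps:
            return lam
        A, B = B, lam   # keep the two most recent points
    return lam

lam_star = secant(0.3, 0.6)  # illustrative starting points
```

Because only first derivatives are evaluated (no $f''$), each iteration is cheaper than a Newton step, at the cost of superlinear rather than quadratic convergence.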

The iterative process given by Eq. (5.73) is known as the secant method (Fig. 5.19).
Since the secant approaches the second derivative of $f(\lambda)$ at $A$ as $B$ approaches $A$,