that is,
$$\tilde{\lambda}^{*} = -\frac{b}{2c} \tag{5.30}$$
The sufficiency condition for the minimum of $h(\lambda)$ is that
$$\left.\frac{d^{2}h}{d\lambda^{2}}\right|_{\tilde{\lambda}^{*}} > 0$$
that is,
$$c > 0 \tag{5.31}$$
To evaluate the constants $a$, $b$, and $c$ in Eq. (5.29), we need to evaluate the function $f(\lambda)$ at three points. Let $\lambda = A$, $\lambda = B$, and $\lambda = C$ be the points at which the function $f(\lambda)$ is evaluated and let $f_A$, $f_B$, and $f_C$ be the corresponding function values, that is,
$$f_A = a + bA + cA^{2}$$
$$f_B = a + bB + cB^{2}$$
$$f_C = a + bC + cC^{2} \tag{5.32}$$
The solution of Eqs. (5.32) gives
$$a = \frac{f_A\,BC(C - B) + f_B\,CA(A - C) + f_C\,AB(B - A)}{(A - B)(B - C)(C - A)} \tag{5.33}$$
$$b = \frac{f_A(B^{2} - C^{2}) + f_B(C^{2} - A^{2}) + f_C(A^{2} - B^{2})}{(A - B)(B - C)(C - A)} \tag{5.34}$$
$$c = -\frac{f_A(B - C) + f_B(C - A) + f_C(A - B)}{(A - B)(B - C)(C - A)} \tag{5.35}$$
From Eqs. (5.30), (5.34), and (5.35), the minimum of $h(\lambda)$ can be obtained as
$$\tilde{\lambda}^{*} = -\frac{b}{2c} = \frac{f_A(B^{2} - C^{2}) + f_B(C^{2} - A^{2}) + f_C(A^{2} - B^{2})}{2\,[\,f_A(B - C) + f_B(C - A) + f_C(A - B)\,]} \tag{5.36}$$
provided that $c$, as given by Eq. (5.35), is positive.
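The computation of Eqs. (5.33) to (5.36) is easily mechanized. The following is a minimal sketch in Python, assuming three distinct points and an illustrative test function; the names quadratic_fit_minimum and f, and the sample points, are choices made here for illustration and are not part of the text.

```python
# Sketch of the quadratic-interpolation step of Eqs. (5.33)-(5.36).

def quadratic_fit_minimum(A, B, C, fA, fB, fC):
    """Fit h(lam) = a + b*lam + c*lam**2 through (A, fA), (B, fB), (C, fC)
    and return (a, b, c, lam_star)."""
    denom = (A - B) * (B - C) * (C - A)

    # Eqs. (5.33)-(5.35)
    a = (fA * B * C * (C - B) + fB * C * A * (A - C) + fC * A * B * (B - A)) / denom
    b = (fA * (B**2 - C**2) + fB * (C**2 - A**2) + fC * (A**2 - B**2)) / denom
    c = -(fA * (B - C) + fB * (C - A) + fC * (A - B)) / denom

    if c <= 0.0:
        # Sufficiency condition (5.31) fails: the fitted quadratic has no minimum.
        raise ValueError("c <= 0: the interpolating quadratic is not convex")

    lam_star = -b / (2.0 * c)   # Eq. (5.30), equivalently Eq. (5.36)
    return a, b, c, lam_star


if __name__ == "__main__":
    # Illustrative function: f(lam) = (lam - 1.5)**2 + 2, sampled at A=0, B=1, C=3.
    f = lambda lam: (lam - 1.5) ** 2 + 2.0
    A, B, C = 0.0, 1.0, 3.0
    a, b, c, lam_star = quadratic_fit_minimum(A, B, C, f(A), f(B), f(C))
    print(a, b, c, lam_star)    # the fit is exact here, so lam_star = 1.5
```

Since the sample function is itself a quadratic, the interpolation reproduces it exactly and returns its true minimizer, $\lambda^{*} = 1.5$.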
To start with, for simplicity, the points $A$, $B$, and $C$ can be chosen as 0, $t$, and $2t$, respectively, where $t$ is a preselected trial step length. By this procedure, we can save one function evaluation since $f_A = f(\lambda = 0)$ is generally known from the previous iteration (of a multivariable search). For this case, Eqs. (5.33) to (5.36) reduce to
$$a = f_A \tag{5.37}$$
$$b = \frac{4 f_B - 3 f_A - f_C}{2t} \tag{5.38}$$
$$c = \frac{f_C + f_A - 2 f_B}{2t^{2}} \tag{5.39}$$
$$\tilde{\lambda}^{*} = \frac{4 f_B - 3 f_A - f_C}{4 f_B - 2 f_C - 2 f_A}\, t \tag{5.40}$$
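A minimal sketch of the simplified three-point formulas (5.37) to (5.40), with $A = 0$, $B = t$, $C = 2t$, is given below; the function name lambda_star_from_trial_step, the sample function, and the trial step $t = 0.5$ are illustrative assumptions, not part of the text.

```python
# Sketch of Eqs. (5.39) and (5.40) for the special points 0, t, 2t.

def lambda_star_from_trial_step(fA, fB, fC, t):
    """Return lambda* of Eq. (5.40), after checking the condition c > 0 via Eq. (5.39)."""
    c = (fC + fA - 2.0 * fB) / (2.0 * t**2)   # Eq. (5.39)
    if c <= 0.0:
        raise ValueError("c <= 0: the quadratic fit has no minimum for this t")
    return (4.0 * fB - 3.0 * fA - fC) / (4.0 * fB - 2.0 * fC - 2.0 * fA) * t


if __name__ == "__main__":
    f = lambda lam: (lam - 1.5) ** 2 + 2.0    # same sample function as above
    t = 0.5
    lam_star = lambda_star_from_trial_step(f(0.0), f(t), f(2.0 * t), t)
    print(lam_star)                           # 1.5 for this quadratic
```

For this sample function the three evaluations at 0, $t$, and $2t$ again determine the quadratic exactly, so Eq. (5.40) returns $\tilde{\lambda}^{*} = 1.5$ regardless of the chosen $t$, provided $c > 0$.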