Therefore,
\[
\tilde{\lambda}_1^* = 2.0 + \frac{-113.95 - 24.41 + 143.20}{-113.95 + 174.68 - 48.82}\,(2.5 - 2.0) = 2.2
\]
Stage 4: To find whether $\tilde{\lambda}_1^*$ is close to $\lambda_1^*$, we test the value of $df/d\lambda_1$.
\[
\left.\frac{df}{d\lambda_1}\right|_{\tilde{\lambda}_1^*} = -0.818
\]
Also,
\[
f(\lambda_1 = \tilde{\lambda}_1^*) = 216.1
\]
Since $df/d\lambda_1$ is not close to zero at $\tilde{\lambda}_1^*$, we use a refitting technique.
Second Fitting: Now we take $A = \tilde{\lambda}_1^*$, since $df/d\lambda_1$ is negative at $\tilde{\lambda}_1^*$, and $B = 2.5$.
Thus
\[
A = 2.2, \qquad f_A = 216.10, \qquad f_A' = -0.818
\]
\[
B = 2.5, \qquad f_B = 241.51, \qquad f_B' = 174.68
\]
With these values we find that
\[
Z = \frac{3\,(216.10 - 241.51)}{2.5 - 2.2} - 0.818 + 174.68 = -80.238
\]
\[
Q = \left[(80.238)^2 + (0.818)(174.68)\right]^{1/2} = 81.10
\]
\[
\tilde{\lambda}_1^* = 2.2 + \frac{-0.818 - 80.238 + 81.10}{-0.818 + 174.68 - 160.476}\,(2.5 - 2.2) = 2.201
\]
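The refitting arithmetic above can be verified with a short program. The following Python sketch is ours, not part of the text; the helper name cubic_fit_step is illustrative. It implements the cubic-interpolation formulas for $Z$, $Q$, and $\tilde{\lambda}^*$ that the example uses:

```python
# Minimal sketch of one cubic-interpolation fitting step, using the
# formulas of this example:
#   Z = 3(f_A - f_B)/(B - A) + f'_A + f'_B
#   Q = [Z^2 - f'_A f'_B]^(1/2)
#   lambda* = A + (f'_A + Z + Q)/(f'_A + f'_B + 2Z) * (B - A)

def cubic_fit_step(A, B, fA, fB, dfA, dfB):
    Z = 3.0 * (fA - fB) / (B - A) + dfA + dfB
    Q = (Z**2 - dfA * dfB) ** 0.5
    return A + (dfA + Z + Q) / (dfA + dfB + 2.0 * Z) * (B - A)

# Second fitting of this example: A = 2.2, B = 2.5
print(cubic_fit_step(2.2, 2.5, 216.10, 241.51, -0.818, 174.68))  # ~2.201
```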
To test for convergence, we evaluate $df/d\lambda_1$ at $\tilde{\lambda}_1^*$. Since $df/d\lambda_1|_{\lambda_1 = \tilde{\lambda}_1^*} = -0.211$, it can be assumed to be sufficiently close to zero, and hence we take $\lambda_1^* \simeq \tilde{\lambda}_1^* = 2.201$. This gives
\[
\mathbf{X}_2 = \mathbf{X}_1 + \lambda_1^* \mathbf{S}_1 =
\begin{Bmatrix} -2 + 0.970\,\lambda_1^* \\ -2 + 0.244\,\lambda_1^* \end{Bmatrix}
= \begin{Bmatrix} 0.135 \\ -1.463 \end{Bmatrix}
\]
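As a quick numerical check of this step, a minimal NumPy sketch (ours, using the numbers of this example):

```python
import numpy as np

X1 = np.array([-2.0, -2.0])      # starting point of this example
S1 = np.array([0.970, 0.244])    # search direction S1 used above
lam_star = 2.201                 # converged step length

X2 = X1 + lam_star * S1
print(X2)  # ~[ 0.135 -1.463]
```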
Testing $\mathbf{X}_2$ for convergence: To test whether the DFP method has converged, we compute the gradient of $f$ at $\mathbf{X}_2$:
\[
\nabla f_2 = \begin{Bmatrix} \partial f/\partial x_1 \\ \partial f/\partial x_2 \end{Bmatrix}_{\mathbf{X}_2}
= \begin{Bmatrix} 78.29 \\ -296.24 \end{Bmatrix}
\]
As the components of this vector are not close to zero, $\mathbf{X}_2$ is not optimum, and hence the procedure has to be continued until the optimum point is found.
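This convergence test can also be scripted. The sketch below assumes the objective of this example is $f(x_1, x_2) = 100(x_2 - x_1^2)^2 + (1 - x_1)^2$, which is consistent with the gradient components quoted above; if the example uses a different $f$, only grad_f changes:

```python
import numpy as np

# Gradient of the assumed objective f = 100*(x2 - x1^2)^2 + (1 - x1)^2
def grad_f(x):
    x1, x2 = x
    return np.array([-400.0 * x1 * (x2 - x1**2) - 2.0 * (1.0 - x1),
                     200.0 * (x2 - x1**2)])

X2 = np.array([0.135, -1.463])
g = grad_f(X2)
print(g)                  # ~[  78.3 -296.2]
print(np.linalg.norm(g))  # far from zero, so X2 is not optimum
```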
Example 6.15 Minimize $f(x_1, x_2) = x_1 - x_2 + 2x_1^2 + 2x_1x_2 + x_2^2$ from the starting point $\mathbf{X}_1 = \begin{Bmatrix} 0 \\ 0 \end{Bmatrix}$ using the DFP method with

\[
[B_1] = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \qquad \varepsilon = 0.01
\]
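Before carrying out the hand computations, it may help to see the whole DFP iteration for this problem in code. The following Python sketch is a minimal implementation of our own, not the book's worked solution; because $f$ is quadratic with Hessian $[A]$, the exact line search has the closed form $\lambda^* = -\nabla f^{\mathsf T}\mathbf{S}/(\mathbf{S}^{\mathsf T}[A]\mathbf{S})$:

```python
import numpy as np

# f(x1, x2) = x1 - x2 + 2 x1^2 + 2 x1 x2 + x2^2 = c.x + 0.5 x.A.x
A = np.array([[4.0, 2.0],
              [2.0, 2.0]])            # Hessian of f
c = np.array([1.0, -1.0])             # linear term

def grad_f(x):
    return c + A @ x

X = np.array([0.0, 0.0])              # X1
B = np.eye(2)                         # [B1] = identity
eps = 0.01

for _ in range(10):
    g = grad_f(X)
    if np.linalg.norm(g) <= eps:      # convergence test with epsilon
        break
    S = -B @ g                        # search direction S_i = -[B_i] grad f_i
    lam = -(g @ S) / (S @ A @ S)      # exact line search (f is quadratic)
    d = lam * S                       # d_i = lambda_i* S_i
    y = grad_f(X + d) - g             # change in gradient
    X = X + d
    # DFP update of the inverse-Hessian approximation [B]:
    B += np.outer(d, d) / (d @ y) - (B @ np.outer(y, y) @ B) / (y @ B @ y)

print(X)  # ~[-1.   1.5], the minimizer of f
```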