$$
[N_1] = -\frac{([B_1]\mathbf{g}_1)([B_1]\mathbf{g}_1)^T}{\mathbf{g}_1^T [B_1]\mathbf{g}_1}
= -\frac{\begin{Bmatrix} -2 \\ 0 \end{Bmatrix}\begin{Bmatrix} -2 & 0 \end{Bmatrix}}{4}
= -\frac{1}{4}\begin{bmatrix} 4 & 0 \\ 0 & 0 \end{bmatrix}
= -\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}
$$

$$
[B_2] = [B_1] + [M_1] + [N_1]
= \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}
+ \begin{bmatrix} \tfrac{1}{2} & -\tfrac{1}{2} \\ -\tfrac{1}{2} & \tfrac{1}{2} \end{bmatrix}
+ \begin{bmatrix} -1 & 0 \\ 0 & 0 \end{bmatrix}
= \begin{bmatrix} 0.5 & -0.5 \\ -0.5 & 1.5 \end{bmatrix}
$$
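As a quick numerical check, the rank-one corrections above can be reproduced with a few lines of NumPy. This is only a sketch of the arithmetic: $\mathbf{g}_1 = (-2, 0)^T$ and $[B_1] = [I]$ are taken from the expressions above, while $\mathbf{S}_1 = (-1, 1)^T$ and $\lambda_1^* = 1$ are assumed values from the first iteration of this example, which precedes this excerpt.

```python
import numpy as np

B1 = np.eye(2)                      # [B1] = [I], initial inverse-Hessian estimate
g1 = np.array([-2.0, 0.0])          # g1 = change in gradient over iteration 1 (from the text)
S1 = np.array([-1.0, 1.0])          # iteration-1 search direction (assumed from earlier steps)
lam1 = 1.0                          # iteration-1 optimal step length (assumed)

M1 = lam1 * np.outer(S1, S1) / (S1 @ g1)            # DFP correction [M1]
N1 = -np.outer(B1 @ g1, B1 @ g1) / (g1 @ B1 @ g1)   # DFP correction [N1]
B2 = B1 + M1 + N1

print(N1)   # [[-1.  0.]
            #  [ 0.  0.]]
print(B2)   # [[ 0.5 -0.5]
            #  [-0.5  1.5]]
```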
Iteration 2 (i = 2)
The next search direction is determined as
$$
\mathbf{S}_2 = -[B_2]\nabla f_2
= -\begin{bmatrix} 0.5 & -0.5 \\ -0.5 & 1.5 \end{bmatrix}\begin{Bmatrix} -1 \\ -1 \end{Bmatrix}
= \begin{Bmatrix} 0 \\ 1 \end{Bmatrix}
$$
To find the minimizing step length $\lambda_2^*$ along $\mathbf{S}_2$, we minimize
$$
f(\mathbf{X}_2 + \lambda_2\mathbf{S}_2)
= f\left(\begin{Bmatrix} -1 \\ 1 \end{Bmatrix} + \lambda_2\begin{Bmatrix} 0 \\ 1 \end{Bmatrix}\right)
= f\left(\begin{Bmatrix} -1 \\ 1 + \lambda_2 \end{Bmatrix}\right)
$$
$$
= -1 - (1 + \lambda_2) + 2(-1)^2 + 2(-1)(1 + \lambda_2) + (1 + \lambda_2)^2
= \lambda_2^2 - \lambda_2 - 1
$$
with respect to $\lambda_2$. Since $df/d\lambda_2 = 2\lambda_2 - 1 = 0$ at $\lambda_2^* = \tfrac{1}{2}$, we obtain
$$
\mathbf{X}_3 = \mathbf{X}_2 + \lambda_2^*\mathbf{S}_2
= \begin{Bmatrix} -1 \\ 1 \end{Bmatrix} + \frac{1}{2}\begin{Bmatrix} 0 \\ 1 \end{Bmatrix}
= \begin{Bmatrix} -1 \\ 1.5 \end{Bmatrix}
$$
This point can be identified as the optimum since
$$
\nabla f_3 = \begin{Bmatrix} 0 \\ 0 \end{Bmatrix}
\quad \text{and} \quad
\|\nabla f_3\| = 0 < \varepsilon
$$
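The whole of iteration 2 can likewise be traced numerically. The sketch below assumes the quadratic objective implied by the expansion above, $f(x_1, x_2) = x_1 - x_2 + 2x_1^2 + 2x_1x_2 + x_2^2$; for a quadratic with Hessian $[H]$, the minimizing step length along $\mathbf{S}_2$ has the closed form $\lambda^* = -(\nabla f_2^T \mathbf{S}_2)/(\mathbf{S}_2^T [H] \mathbf{S}_2)$.

```python
import numpy as np

def grad(x):
    # Gradient of f(x1, x2) = x1 - x2 + 2*x1**2 + 2*x1*x2 + x2**2
    return np.array([1 + 4*x[0] + 2*x[1], -1 + 2*x[0] + 2*x[1]])

H = np.array([[4.0, 2.0], [2.0, 2.0]])      # Hessian of the assumed quadratic
B2 = np.array([[0.5, -0.5], [-0.5, 1.5]])   # inverse-Hessian estimate from iteration 1
X2 = np.array([-1.0, 1.0])

S2 = -B2 @ grad(X2)                          # search direction, {0, 1}
lam2 = -(grad(X2) @ S2) / (S2 @ H @ S2)      # exact minimizing step for a quadratic
X3 = X2 + lam2 * S2

print(S2, lam2)        # [0. 1.] 0.5
print(X3, grad(X3))    # [-1.   1.5] [0. 0.]  -> gradient vanishes at the optimum
```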
6.15 Broyden–Fletcher–Goldfarb–Shanno Method
As stated earlier, a major difference between the DFP and BFGS methods is that in
the BFGS method, the Hessian matrix is updated iteratively rather than the inverse of
the Hessian matrix. The BFGS method can be described by the following steps.
1. Start with an initial point $\mathbf{X}_1$ and an $n \times n$ positive definite symmetric matrix $[B_1]$ as an initial estimate of the inverse of the Hessian matrix of $f$. In the absence of additional information, $[B_1]$ is taken as the identity matrix $[I]$. Compute the gradient vector $\nabla f_1 = \nabla f(\mathbf{X}_1)$ and set the iteration number as $i = 1$.
2. Compute the gradient of the function, $\nabla f_i$, at point $\mathbf{X}_i$, and set
$$
\mathbf{S}_i = -[B_i]\nabla f_i \tag{6.134}
$$