6.15 Broyden–Fletcher–Goldfarb–Shanno Method

3. Find the optimal step length λ_i* in the direction S_i and set

$$X_{i+1} = X_i + \lambda_i^* S_i \tag{6.135}$$

4. Test the point X_{i+1} for optimality. If ||∇f_{i+1}|| ≤ ε, where ε is a small quantity, take X* ≈ X_{i+1} and stop the process. Otherwise, go to step 5.
5. Update the Hessian matrix as

$$[B_{i+1}] = [B_i] + \left(1 + \frac{g_i^T [B_i]\, g_i}{d_i^T g_i}\right) \frac{d_i d_i^T}{d_i^T g_i} - \frac{d_i g_i^T [B_i]}{d_i^T g_i} - \frac{[B_i]\, g_i d_i^T}{d_i^T g_i} \tag{6.136}$$

where

$$d_i = X_{i+1} - X_i = \lambda_i^* S_i \tag{6.137}$$

$$g_i = \nabla f_{i+1} - \nabla f_i = \nabla f(X_{i+1}) - \nabla f(X_i) \tag{6.138}$$

6. Set the new iteration number as i = i + 1 and go to step 2. (A code sketch of steps 2 through 6 is given below.)
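The following Python sketch shows one way steps 2 through 6 might be implemented, with [B_i] stored as the approximation to the inverse of the Hessian and updated by Eq. (6.136). The exact line search of step 3 is replaced here by a simple Armijo backtracking search, and the names bfgs, _backtracking, and max_iter are illustrative assumptions rather than part of the algorithm statement above.

```python
import numpy as np


def bfgs(f, grad, x0, eps=0.01, max_iter=100):
    """Minimal sketch of steps 2-6 of the BFGS method of Section 6.15.

    B approximates the inverse of the Hessian and is updated with
    Eq. (6.136).  The exact line search of step 3 is replaced by an
    Armijo backtracking search (an illustrative assumption).
    """
    x = np.asarray(x0, dtype=float)
    B = np.eye(x.size)                        # [B_1] = [I]
    g_cur = grad(x)
    for _ in range(max_iter):
        S = -B @ g_cur                        # step 2: search direction S_i
        lam = _backtracking(f, x, S, g_cur)   # step 3: approximate lambda_i*
        x_new = x + lam * S                   # Eq. (6.135)
        g_new = grad(x_new)
        if np.linalg.norm(g_new) <= eps:      # step 4: optimality test
            return x_new
        d = x_new - x                         # Eq. (6.137): d_i
        g = g_new - g_cur                     # Eq. (6.138): g_i
        dg = d @ g
        # step 5: update of Eq. (6.136)
        B = (B
             + (1.0 + (g @ B @ g) / dg) * np.outer(d, d) / dg
             - np.outer(d, g @ B) / dg
             - np.outer(B @ g, d) / dg)
        x, g_cur = x_new, g_new               # step 6: next iteration
    return x


def _backtracking(f, x, S, g, lam=1.0, rho=0.5, c=1e-4):
    """Armijo backtracking line search (a stand-in for the exact search)."""
    f0, slope = f(x), g @ S
    while f(x + lam * S) > f0 + c * lam * slope and lam > 1e-10:
        lam *= rho
    return lam
```

Called with the objective and gradient of Example 6.16 below and x0 = [0, 0], this sketch produces the first search direction S_1 = (-1, 1) found in the hand calculation.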

Remarks:


1. The BFGS method can be considered as a quasi-Newton, conjugate gradient, and variable metric method.
2. Since the inverse of the Hessian matrix is approximated, the BFGS method can be called an indirect update method.
3. If the step lengths λ_i* are found accurately, the matrix [B_i] retains its positive definiteness as the value of i increases. However, in practical application, the matrix [B_i] might become indefinite or even singular if the λ_i* are not found accurately. As such, periodic resetting of the matrix [B_i] to the identity matrix [I] is desirable (a sketch of one such safeguard follows these remarks). However, numerical experience indicates that the BFGS method is less influenced by errors in λ_i* than is the DFP method.
4. It has been shown that the BFGS method exhibits superlinear convergence near X* [6.19].
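Remark 3 does not specify how the reset to [I] might be triggered. One common safeguard, offered here only as a sketch and not as the book's prescription, is to test the curvature quantity d_i^T g_i before applying Eq. (6.136), since its positivity is what preserves the positive definiteness of [B_i], and to fall back to the identity matrix when the test fails. The tolerance used below is an illustrative choice.

```python
import numpy as np


def safeguarded_update(B, d, g):
    """Apply Eq. (6.136) only when d^T g is safely positive.

    Otherwise reset [B] to the identity matrix, in the spirit of Remark 3.
    The relative tolerance 1e-12 is an illustrative choice, not from the text.
    """
    dg = d @ g
    if dg <= 1e-12 * np.linalg.norm(d) * np.linalg.norm(g):
        return np.eye(B.shape[0])             # reset: [B_{i+1}] = [I]
    return (B
            + (1.0 + (g @ B @ g) / dg) * np.outer(d, d) / dg
            - np.outer(d, g @ B) / dg
            - np.outer(B @ g, d) / dg)
```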

Example 6.16 Minimize f(x_1, x_2) = x_1 - x_2 + 2x_1^2 + 2x_1x_2 + x_2^2 from the starting point

$$X_1 = \begin{Bmatrix} 0 \\ 0 \end{Bmatrix}$$

using the BFGS method with

$$[B_1] = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \qquad \varepsilon = 0.01.$$

SOLUTION

Iteration 1 (i = 1)

Here

$$\nabla f_1 = \nabla f(X_1) = \left.\begin{Bmatrix} 1 + 4x_1 + 2x_2 \\ -1 + 2x_1 + 2x_2 \end{Bmatrix}\right|_{(0,0)} = \begin{Bmatrix} 1 \\ -1 \end{Bmatrix}$$

and hence

$$S_1 = -[B_1]\,\nabla f_1 = -\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \begin{Bmatrix} 1 \\ -1 \end{Bmatrix} = \begin{Bmatrix} -1 \\ 1 \end{Bmatrix}$$
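As a quick numerical check of these Iteration 1 quantities, the short script below evaluates the gradient at X_1 = (0, 0) and the corresponding search direction. The names f, grad, X1, B1, g1, and S1 are illustrative choices, not part of the book's text.

```python
import numpy as np


def f(x):
    """Objective of Example 6.16."""
    return x[0] - x[1] + 2 * x[0]**2 + 2 * x[0] * x[1] + x[1]**2


def grad(x):
    """Gradient of f."""
    return np.array([1 + 4 * x[0] + 2 * x[1],
                     -1 + 2 * x[0] + 2 * x[1]])


X1 = np.zeros(2)     # starting point {0, 0}
B1 = np.eye(2)       # [B_1] = [I]

g1 = grad(X1)        # -> array([ 1., -1.]), matching nabla f_1 above
S1 = -B1 @ g1        # -> array([-1.,  1.]), matching S_1 above
print(g1, S1)
```

For reference, setting this gradient to zero gives the exact minimizer (-1, 1.5), so later iterations of the method can be checked against that point.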