Engineering Optimization: Theory and Practice, Fourth Edition

6.10 Conjugate Gradient (Fletcher–Reeves) Method

6.10.2 Fletcher–Reeves Method


The iterative procedure of the Fletcher–Reeves method can be stated as follows:

1. Start with an arbitrary initial point X1.
2. Set the first search direction S1 = −∇f(X1) = −∇f1.
3. Find the point X2 according to the relation

   X2 = X1 + λ1* S1                                        (6.80)

   where λ1* is the optimal step length in the direction S1. Set i = 2 and go to the
   next step.
4. Find ∇fi = ∇f(Xi) and set

   Si = −∇fi + (|∇fi|^2 / |∇fi−1|^2) Si−1                  (6.81)

5. Compute the optimum step length λi* in the direction Si, and find the new point

   Xi+1 = Xi + λi* Si                                      (6.82)

6. Test for the optimality of the point Xi+1. If Xi+1 is optimum, stop the process.
   Otherwise, set i = i + 1 and go to step 4.
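Steps 1 to 6 can be sketched in code for the quadratic case f(X) = (1/2) XᵀAX + bᵀX, where the optimal step length along S has the closed form λ* = −(∇fᵀS)/(SᵀAS), so the exact line searches in steps 3 and 5 need no numerical minimization. The function name `fletcher_reeves`, the tolerance, and the iteration cap below are illustrative choices, not taken from the text.

```python
# Fletcher-Reeves conjugate gradient for f(X) = 0.5 X^T A X + b^T X.
# Gradient: grad f = A X + b.  For a quadratic, the optimal step along S
# is lam = -(g . S) / (S . A S), giving an exact line search in closed form.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(A, x):
    return [dot(row, x) for row in A]

def fletcher_reeves(A, b, x, tol=1e-10, max_iter=100):
    g = [gi + bi for gi, bi in zip(matvec(A, x), b)]   # grad f at X1
    S = [-gi for gi in g]                              # step 2: S1 = -grad f1
    for _ in range(max_iter):
        if dot(g, g) < tol:                            # step 6: optimality test
            break
        AS = matvec(A, S)
        lam = -dot(g, S) / dot(S, AS)                  # optimal step length
        x = [xi + lam * si for xi, si in zip(x, S)]    # Eqs. (6.80)/(6.82)
        g_new = [gi + lam * asi for gi, asi in zip(g, AS)]
        beta = dot(g_new, g_new) / dot(g, g)           # |grad fi|^2 / |grad fi-1|^2
        S = [-gn + beta * si for gn, si in zip(g_new, S)]   # Eq. (6.81)
        g = g_new
    return x
```

For instance, `fletcher_reeves([[3.0, 1.0], [1.0, 2.0]], [-1.0, -1.0], [0.0, 0.0])` returns the minimizer of the corresponding quadratic, here (0.2, 0.4), in at most n = 2 cycles.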
Remarks:
1. The Fletcher–Reeves method was originally proposed by Hestenes and Stiefel
   [6.14] as a method for solving systems of linear equations derived from the
   stationarity conditions of a quadratic. Since the directions Si used in this method
   are A-conjugate, the process should converge in n cycles or less for a quadratic
   function. However, for ill-conditioned quadratics (whose contours are highly
   eccentric and distorted), the method may require many more than n cycles for
   convergence. The reason for this has been found to be the cumulative effect
   of rounding errors. Since Si is given by Eq. (6.81), any error resulting from
   the inaccuracies involved in the determination of λi*, and from the round-off
   error involved in accumulating the successive (|∇fi|^2 / |∇fi−1|^2) Si−1 terms, is
   carried forward through the vector Si. Thus the search directions Si will be
   progressively contaminated by these errors. Hence it is necessary, in practice,
   to restart the method periodically after every, say, m steps by taking the new
   search direction as the steepest descent direction. That is, after every m steps,
   Sm+1 is set equal to −∇fm+1 instead of the usual form. Fletcher and Reeves
   have recommended a value of m = n + 1, where n is the number of design
   variables.
2. Despite the limitations indicated above, the Fletcher–Reeves method is vastly
   superior to the steepest descent method and the pattern search methods, but
   it turns out to be rather less efficient than the Newton and quasi-Newton
   (variable metric) methods discussed in later sections.
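The periodic restart described in Remark 1 amounts to a small change in the direction update of Eq. (6.81): every m-th iteration the search direction is reset to steepest descent, discarding the accumulated conjugacy information and, with it, the accumulated round-off error. The function name and the modulus test below are illustrative choices.

```python
def next_direction(g_new, g_old, S_old, iter_count, m):
    """Fletcher-Reeves direction update with a restart every m steps.

    When iter_count is a multiple of m, the direction is reset to the
    steepest descent direction -grad f (the restart recommended with
    m = n + 1, n being the number of design variables); otherwise the
    usual conjugate update of Eq. (6.81) is applied.
    """
    if iter_count % m == 0:                        # restart: steepest descent
        return [-gi for gi in g_new]
    beta = sum(gi * gi for gi in g_new) / sum(gi * gi for gi in g_old)
    return [-gn + beta * si for gn, si in zip(g_new, S_old)]   # Eq. (6.81)
```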

Example 6.9  Minimize f(x1, x2) = x1 − x2 + 2x1^2 + 2x1x2 + x2^2 starting from the
point X1 = {0, 0}.
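As a quick numerical check of the procedure on this function (the intermediate values below are our own computation, not quoted from the text), the Hessian of f is constant, A = [[4, 2], [2, 2]], so the exact step length is available in closed form and the method should converge in n = 2 cycles:

```python
# Two Fletcher-Reeves cycles on f(x1, x2) = x1 - x2 + 2 x1^2 + 2 x1 x2 + x2^2,
# starting from X1 = (0, 0).  f is quadratic in n = 2 variables, so with
# exact line searches the method converges in two cycles.

def grad(x1, x2):
    # df/dx1 = 1 + 4 x1 + 2 x2,  df/dx2 = -1 + 2 x1 + 2 x2
    return (1 + 4 * x1 + 2 * x2, -1 + 2 * x1 + 2 * x2)

A = ((4.0, 2.0), (2.0, 2.0))           # constant Hessian of f
x = (0.0, 0.0)                         # X1
g = grad(*x)
S = (-g[0], -g[1])                     # S1 = -grad f1 = (-1, 1)
for _ in range(2):
    AS = (A[0][0] * S[0] + A[0][1] * S[1], A[1][0] * S[0] + A[1][1] * S[1])
    lam = -(g[0] * S[0] + g[1] * S[1]) / (S[0] * AS[0] + S[1] * AS[1])
    x = (x[0] + lam * S[0], x[1] + lam * S[1])          # Eqs. (6.80)/(6.82)
    g_new = grad(*x)
    beta = (g_new[0] ** 2 + g_new[1] ** 2) / (g[0] ** 2 + g[1] ** 2)
    S = (-g_new[0] + beta * S[0], -g_new[1] + beta * S[1])  # Eq. (6.81)
    g = g_new

print(x)  # prints (-1.0, 1.5), the minimizer, where grad f = (0, 0)
```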