6.10 Fletcher–Reeves Method

We have seen that Powell's conjugate direction method requires n single-variable minimizations per iteration and sets up a new conjugate direction at the end of each iteration. Thus it requires, in general, n^2 single-variable minimizations to find the minimum of a quadratic function. On the other hand, if we can evaluate the gradients of the objective function, we can set up a new conjugate direction after every one-dimensional minimization and hence achieve faster convergence. The construction of conjugate directions and the development of the Fletcher–Reeves method are discussed in this section.

6.10.1 Development of the Fletcher–Reeves Method


The Fletcher–Reeves method is developed by modifying the steepest descent method to make it quadratically convergent. Starting from an arbitrary point X_1, the quadratic function

f(\mathbf{X}) = \tfrac{1}{2}\,\mathbf{X}^T [A]\,\mathbf{X} + \mathbf{B}^T \mathbf{X} + C    (6.74)

can be minimized by searching along the search direction S_1 = -∇f_1 (the steepest descent direction) using the step length (see Problem 6.40):

\lambda_1^* = -\frac{\mathbf{S}_1^T \nabla f_1}{\mathbf{S}_1^T [A]\,\mathbf{S}_1}    (6.75)
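The step length in Eq. (6.75) is the result of an exact line search (this is the content of Problem 6.40). A brief sketch of the derivation, using only Eq. (6.74) and ∇f_1 = [A]X_1 + B:

```latex
% Restrict the quadratic (6.74) to the line X = X_1 + lambda * S_1:
g(\lambda) = f(\mathbf{X}_1 + \lambda\,\mathbf{S}_1)
           = f(\mathbf{X}_1) + \lambda\,\mathbf{S}_1^T \nabla f_1
             + \tfrac{1}{2}\lambda^2\,\mathbf{S}_1^T [A]\,\mathbf{S}_1
% Setting g'(lambda) = 0 gives the minimizing step length:
g'(\lambda_1^*) = \mathbf{S}_1^T \nabla f_1
                  + \lambda_1^*\,\mathbf{S}_1^T [A]\,\mathbf{S}_1 = 0
\quad\Longrightarrow\quad
\lambda_1^* = -\frac{\mathbf{S}_1^T \nabla f_1}{\mathbf{S}_1^T [A]\,\mathbf{S}_1}
```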

The second search direction S_2 is found as a linear combination of S_1 and -∇f_2:

\mathbf{S}_2 = -\nabla f_2 + \beta_2\,\mathbf{S}_1    (6.76)

where the constant β_2 can be determined by making S_1 and S_2 conjugate with respect to [A]. This leads to (see Problem 6.41):

\beta_2 = -\frac{\nabla f_2^T \nabla f_2}{\nabla f_1^T \mathbf{S}_1}
        = \frac{\nabla f_2^T \nabla f_2}{\nabla f_1^T \nabla f_1}    (6.77)
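A sketch of the steps behind Eq. (6.77), the content of Problem 6.41: impose conjugacy on Eq. (6.76), then use ∇f_2 - ∇f_1 = λ_1^* [A] S_1 (which follows from the gradient of the quadratic) together with the exact-line-search condition S_1^T ∇f_2 = 0, which, since S_1 = -∇f_1, also gives ∇f_1^T ∇f_2 = 0:

```latex
% Conjugacy of S_2 = -grad f_2 + beta_2 S_1 with S_1:
\mathbf{S}_2^T [A]\,\mathbf{S}_1 = 0
\;\Longrightarrow\;
\beta_2 = \frac{\nabla f_2^T [A]\,\mathbf{S}_1}{\mathbf{S}_1^T [A]\,\mathbf{S}_1}
        = \frac{\nabla f_2^T (\nabla f_2 - \nabla f_1)}
               {\mathbf{S}_1^T (\nabla f_2 - \nabla f_1)}
% With S_1^T grad f_2 = 0 and grad f_1^T grad f_2 = 0:
\beta_2 = \frac{\nabla f_2^T \nabla f_2}{-\mathbf{S}_1^T \nabla f_1}
        = -\frac{\nabla f_2^T \nabla f_2}{\nabla f_1^T \mathbf{S}_1}
        = \frac{\nabla f_2^T \nabla f_2}{\nabla f_1^T \nabla f_1}
```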

This process can be continued to obtain the general formula for the ith search direction as

\mathbf{S}_i = -\nabla f_i + \beta_i\,\mathbf{S}_{i-1}    (6.78)

where

\beta_i = \frac{\nabla f_i^T \nabla f_i}{\nabla f_{i-1}^T \nabla f_{i-1}}    (6.79)

Thus the Fletcher–Reeves algorithm can be stated as follows.
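Before the formal statement, a minimal Python sketch of the iteration defined by Eqs. (6.75), (6.78), and (6.79), applied to the quadratic (6.74), may make the updates concrete. The function name, test matrix, starting point, and tolerance below are assumptions chosen for illustration, not part of the text:

```python
import numpy as np

def fletcher_reeves(A, B, x, tol=1e-8, max_iter=100):
    """Minimize f(X) = 0.5 X^T [A] X + B^T X + C by the Fletcher-Reeves
    method, Eqs. (6.74)-(6.79); A must be symmetric positive definite.
    (The constant C does not affect the minimizer and is omitted.)"""
    grad = A @ x + B            # gradient of the quadratic (6.74)
    S = -grad                   # first direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(grad) <= tol:
            break
        # exact line search along S, cf. Eq. (6.75)
        lam = -(S @ grad) / (S @ (A @ S))
        x = x + lam * S
        grad_new = A @ x + B
        # Fletcher-Reeves coefficient, Eq. (6.79)
        beta = (grad_new @ grad_new) / (grad @ grad)
        # new conjugate direction, Eq. (6.78)
        S = -grad_new + beta * S
        grad = grad_new
    return x

# Assumed test case: f(x1, x2) = x1^2 + x2^2 - x1*x2, minimum at the origin
A = np.array([[2.0, -1.0], [-1.0, 2.0]])
B = np.zeros(2)
print(fletcher_reeves(A, B, x=np.array([1.0, 2.0])))  # -> [0. 0.]
```

For a general nonquadratic function, [A] is not available, so the exact step (6.75) is replaced by a numerical one-dimensional minimization of f(X_i + λ S_i); the direction updates (6.78) and (6.79) are unchanged.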