
where $x_j$ is the $j$th component of $X$. But

$$\frac{\partial x_j}{\partial \lambda} = \frac{\partial}{\partial \lambda}\,(x_{ij} + \lambda s_{ij}) = s_{ij} \tag{6.66}$$

where $x_{ij}$ and $s_{ij}$ are the $j$th components of $X_i$ and $S_i$, respectively. Hence

$$\frac{df}{d\lambda} = \sum_{j=1}^{n} \frac{\partial f}{\partial x_j}\, s_{ij} = \nabla f^{T} S_i \tag{6.67}$$

If $\lambda^*$ minimizes $f$ in the direction $S_i$, we have

$$\left.\frac{df}{d\lambda}\right|_{\lambda = \lambda^*} = \nabla f\big|_{\lambda^*}^{T}\, S_i = 0 \tag{6.68}$$

at the point $X_i + \lambda^* S_i$.
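Equation (6.68) can be checked numerically. The sketch below is a hypothetical illustration, not from the text: it minimizes an arbitrarily chosen quadratic along a fixed direction $S_i$ using scipy's minimize_scalar and confirms that the directional derivative $\nabla f^{T} S_i$ vanishes at $\lambda^*$.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative quadratic (chosen arbitrarily, not from the text)
f = lambda X: X[0]**2 + 2.0 * X[1]**2
grad_f = lambda X: np.array([2.0 * X[0], 4.0 * X[1]])

X_i = np.array([1.0, 1.0])   # current point X_i
S_i = -grad_f(X_i)           # a descent direction S_i

# One-dimensional minimization of f(X_i + lambda * S_i)
lam_star = minimize_scalar(lambda lam: f(X_i + lam * S_i)).x

# Directional derivative at lambda*: grad(f)^T S_i should be ~0, Eq. (6.68)
print(grad_f(X_i + lam_star * S_i) @ S_i)
```

minimize_scalar uses Brent's method by default, which suits this smooth one-dimensional problem.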

6.9 Steepest Descent (Cauchy) Method


The negative of the gradient vector was first used as a direction for minimization by Cauchy in 1847 [6.12]. In this method we start from an initial trial point $X_1$ and iteratively move along the steepest descent directions until the optimum point is found. The steepest descent method can be summarized by the following steps (a Python sketch of the procedure follows the list):

1. Start with an arbitrary initial point $X_1$. Set the iteration number as $i = 1$.
2. Find the search direction $S_i$ as
$$S_i = -\nabla f_i = -\nabla f(X_i) \tag{6.69}$$
3. Determine the optimal step length $\lambda_i^*$ in the direction $S_i$ and set
$$X_{i+1} = X_i + \lambda_i^* S_i = X_i - \lambda_i^* \nabla f_i \tag{6.70}$$
4. Test the new point, $X_{i+1}$, for optimality. If $X_{i+1}$ is optimum, stop the process. Otherwise, go to step 5.
5. Set the new iteration number $i = i + 1$ and go to step 2.
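A minimal Python sketch of these five steps follows, under two assumptions not fixed by the text: the optimality test of step 4 is taken as $\|\nabla f(X_{i+1})\|$ falling below a tolerance, and the one-dimensional search of step 3 reuses scipy's minimize_scalar. The function name and default values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def steepest_descent(f, grad_f, X1, tol=1e-6, max_iter=1000):
    """Cauchy's steepest descent method (steps 1-5 above).

    Assumed convergence test for step 4: ||grad f(X)|| < tol.
    """
    X = np.asarray(X1, dtype=float)                      # step 1: X_1, i = 1
    for i in range(1, max_iter + 1):
        S = -grad_f(X)                                   # step 2: Eq. (6.69)
        lam = minimize_scalar(lambda t: f(X + t * S)).x  # step 3: lambda_i*
        X = X + lam * S                                  # Eq. (6.70)
        if np.linalg.norm(grad_f(X)) < tol:              # step 4: optimality test
            return X                                     # X_{i+1} is optimum; stop
    return X                                             # step 5 is the loop increment
```

Note that an exact line search in step 3 makes $\nabla f(X_{i+1})^{T} S_i = 0$ by Eq. (6.68), so consecutive search directions are mutually orthogonal; this is what produces the characteristic zigzag path of the method.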
The method of steepest descent may appear to be the best unconstrained minimization technique, since each one-dimensional search starts in the “best” direction. However, because the steepest descent direction is a local property, the method is not really effective in most problems.

Example 6.8 Minimize $f(x_1, x_2) = x_1 - x_2 + 2x_1^2 + 2x_1 x_2 + x_2^2$ starting from the point $X_1 = \begin{Bmatrix} 0 \\ 0 \end{Bmatrix}$.
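As a usage illustration (a sketch, not the text's worked solution), the steepest_descent function above can be applied to Example 6.8; the gradient is obtained by direct differentiation of $f$.

```python
import numpy as np

f = lambda X: X[0] - X[1] + 2*X[0]**2 + 2*X[0]*X[1] + X[1]**2
grad_f = lambda X: np.array([1 + 4*X[0] + 2*X[1],    # df/dx1
                             -1 + 2*X[0] + 2*X[1]])  # df/dx2

# Reuses steepest_descent from the sketch in Section 6.9
X_opt = steepest_descent(f, grad_f, X1=[0.0, 0.0])
print(X_opt)   # approaches (-1, 1.5), where f = -1.25
```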