Nonlinear Programming II: Unconstrained Optimization Techniques

S_n, S_p^(1), S_p^(2), ... are A-conjugate. Since, by Theorem 6.2, any search method involving minimization along a set of conjugate directions is quadratically convergent, Powell’s method is quadratically convergent. From the method used for constructing the conjugate directions S_p^(1), S_p^(2), ..., we find that n minimization cycles are required to complete the construction of the n conjugate directions. In the ith cycle, the minimization is done along the i already constructed conjugate directions and the n − i nonconjugate (coordinate) directions. Thus after n cycles, all the n search directions are mutually conjugate and a quadratic will theoretically be minimized in n^2 one-dimensional minimizations. This proves the quadratic convergence of Powell’s method.
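
To make the cycle structure concrete, the following Python sketch implements a simplified version of the basic procedure: in each cycle the function is line-minimized along the current n directions, a pattern direction S_p is formed from the total displacement of the cycle, one further line search is made along S_p, and the oldest direction is discarded. This is only an illustrative sketch, not the exact bookkeeping of the text; the names powell, powell_cycle, and line_minimize, and the use of scipy.optimize.minimize_scalar for the one-dimensional minimizations, are assumptions.

import numpy as np
from scipy.optimize import minimize_scalar

def line_minimize(f, x, s):
    # One-dimensional minimization of f along direction s from point x.
    lam_star = minimize_scalar(lambda lam: f(x + lam * s)).x
    return x + lam_star * s

def powell_cycle(f, x0, directions):
    # One cycle: line-minimize along each current direction, then along the
    # pattern direction S_p = (end of cycle) - (start of cycle), and replace
    # the oldest direction by S_p.
    x = x0.copy()
    for s in directions:
        x = line_minimize(f, x, s)
    s_p = x - x0
    x = line_minimize(f, x, s_p)
    return x, directions[1:] + [s_p]

def powell(f, x0, n_cycles=None):
    x = np.asarray(x0, dtype=float)
    n = len(x)
    directions = [np.eye(n)[i] for i in range(n)]  # start with the coordinate directions
    for _ in range(n_cycles if n_cycles is not None else n):
        x, directions = powell_cycle(f, x, directions)
    return x

# Quadratic of Example 6.6 below: converges to approximately [-1.0, 1.5] in n = 2 cycles.
f = lambda x: x[0] - x[1] + 2*x[0]**2 + 2*x[0]*x[1] + x[1]**2
print(powell(f, [0.0, 0.0]))

Note that this simplified sketch performs n + 1 one-dimensional minimizations per cycle (including the extra search along S_p), so its operation count differs slightly from the n^2 total quoted above.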
It is to be noted that, as with most numerical techniques, the convergence observed in practical problems may not be as good as the theory indicates. Powell’s method may require considerably more iterations to minimize a function than the theoretically estimated number. There are several reasons for this:
1. Since the estimate of n cycles is valid only for quadratic functions, nonquadratic functions generally require more than n cycles.
2. The proof of quadratic convergence assumes that the exact minimum is found in each of the one-dimensional minimizations. In practice, the minimizing step lengths λ_i* are only approximate, and hence the subsequent directions will not be exactly conjugate. Thus the method requires more iterations to achieve overall convergence.
3. Powell’s method, as described above, can break down before the minimum point is found, because the search directions S_i may become linearly dependent, or nearly so, during the numerical computation.

Convergence Criterion. The convergence criterion one would generally adopt in a method such as Powell’s is to stop the procedure whenever a minimization cycle produces a change in all variables of less than one-tenth of the required accuracy. However, a more elaborate convergence criterion, which is more likely to prevent premature termination of the process, was given by Powell [6.7].
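
A minimal sketch of the simple stopping rule, under the assumption that the required accuracy is specified per variable, might look as follows (the function and argument names are illustrative; Powell’s more elaborate criterion [6.7] is not reproduced here):

import numpy as np

def cycle_converged(x_start, x_end, required_accuracy):
    # Stop when a complete minimization cycle changes every variable by
    # less than one-tenth of the required accuracy.
    return bool(np.all(np.abs(x_end - x_start) < 0.1 * required_accuracy))

Such a check would be evaluated once per cycle, comparing the design vector at the start of the cycle with the one at its end.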

Example 6.6 Minimize f(x1, x2) = x1 − x2 + 2x1^2 + 2x1x2 + x2^2 from the starting point X_1 = {0, 0}^T using Powell’s method.

SOLUTION

Cycle 1: Univariate Search

We minimize f along S_2 = S_n = {0, 1}^T from X_1. To find the correct direction (+S_2 or −S_2) for decreasing the value of f, we take the probe length as ε = 0.01. As f_1 = f(X_1) = 0.0 and

f_+ = f(X_1 + εS_2) = f(0.0, 0.01) = −0.0099 < f_1

f decreases along the direction +S_2. To find the minimizing step length λ* along S_2, we minimize

f(X_1 + λS_2) = f(0.0, λ) = λ^2 − λ
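
As a quick numerical check of this first univariate step (a sketch, not part of the original solution; the use of scipy.optimize.minimize_scalar is an assumption), the probe values and the minimizing step length along S_2 can be verified as follows:

from scipy.optimize import minimize_scalar

f = lambda x1, x2: x1 - x2 + 2*x1**2 + 2*x1*x2 + x2**2

# Probe along S_2 = {0, 1}^T from X_1 = {0, 0}^T with probe length eps = 0.01
print(f(0.0, 0.0))    # f_1 = 0.0
print(f(0.0, 0.01))   # f_+ = -0.0099 < f_1, so f decreases along +S_2

# Minimizing step length: minimize f(0, lam) = lam**2 - lam over lam
res = minimize_scalar(lambda lam: f(0.0, lam))
print(res.x, res.fun)  # lam* = 0.5, f = -0.25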