
those requiring only first derivatives of the function are called first-order methods; those requiring both first and second derivatives of the function are termed second-order methods.

6.1.2 General Approach


All the unconstrained minimization methods are iterative in nature and hence they start
from an initial trial solution and proceed toward the minimum point in a sequential
manner as shown in Fig. 6.3. The iterative process is given by

$$X_{i+1} = X_i + \lambda_i^* S_i \tag{6.4}$$

where $X_i$ is the starting point, $S_i$ is the search direction, $\lambda_i^*$ is the optimal step length, and $X_{i+1}$ is the final point in iteration $i$. It is important to note that all the unconstrained minimization methods (1) require an initial point $X_1$ to start the iterative procedure, and (2) differ from one another only in the method of generating the new point $X_{i+1}$ (from $X_i$) and in testing the point $X_{i+1}$ for optimality.
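A minimal Python sketch of this iterative scheme, assuming (purely for illustration) a steepest-descent search direction $S_i = -\nabla f(X_i)$ and a backtracking (Armijo) line search standing in for the exact optimal step length $\lambda_i^*$; the function names and tolerances here are hypothetical, not from the text:

import numpy as np

def backtracking_step(f, x, s, g, alpha=1.0, rho=0.5, c=1e-4):
    # Shrink the trial step until the Armijo sufficient-decrease condition
    # holds; this approximates the optimal step length lambda_i* of Eq. (6.4).
    while f(x + alpha * s) > f(x) + c * alpha * (g @ s):
        alpha *= rho
    return alpha

def minimize(f, grad_f, x1, tol=1e-8, max_iter=1000):
    x = np.asarray(x1, dtype=float)      # initial point X_1
    for _ in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:      # optimality test on X_i
            break
        s = -g                           # search direction S_i (steepest descent)
        lam = backtracking_step(f, x, s, g)
        x = x + lam * s                  # Eq. (6.4): X_{i+1} = X_i + lambda_i* S_i
    return x

# Usage: minimize f(X) = x1^2 + 4 x2^2 from X_1 = (2, 1); the minimum is at (0, 0).
f = lambda x: x[0]**2 + 4.0 * x[1]**2
grad_f = lambda x: np.array([2.0 * x[0], 8.0 * x[1]])
print(minimize(f, grad_f, [2.0, 1.0]))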

6.1.3 Rate of Convergence


Different iterative optimization methods have different rates of convergence. In general,
an optimization method is said to have convergence of order $p$ if [6.2]

$$\frac{\|X_{i+1} - X^*\|}{\|X_i - X^*\|^p} \le k, \qquad k \ge 0,\; p \ge 1 \tag{6.5}$$

where $X_i$ and $X_{i+1}$ denote the points obtained at the end of iterations $i$ and $i+1$, respectively, $X^*$ represents the optimum point, and $\|X\|$ denotes the length or norm of the vector $X$:

$$\|X\| = \sqrt{x_1^2 + x_2^2 + \cdots + x_n^2}$$

If $p = 1$ and $0 \le k \le 1$, the method is said to be linearly convergent (corresponds to slow convergence). If $p = 2$, the method is said to be quadratically convergent (corresponds to fast convergence). An optimization method is said to have superlinear convergence (corresponds to fast convergence) if

$$\lim_{i \to \infty} \frac{\|X_{i+1} - X^*\|}{\|X_i - X^*\|} \to 0 \tag{6.6}$$

The definitions of rates of convergence given in Eqs. (6.5) and (6.6) are applicable to single-variable as well as multivariable optimization problems. In the case of single-variable problems, the vector $X_i$, for example, degenerates to a scalar, $x_i$.
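These definitions can be checked numerically by tabulating the ratio in Eq. (6.5) along an iteration. The sketch below (an illustration, not from the text) applies Newton's method to minimize $f(x) = x^4/4 - x$, whose minimizer is $x^* = 1$; with $p = 2$ the ratio settles near a constant $k$, the numerical signature of quadratic convergence:

# Newton's method for minimizing f(x) = x^4/4 - x, with minimizer x* = 1.
fp  = lambda x: x**3 - 1          # f'(x)
fpp = lambda x: 3 * x**2          # f''(x)

x_star = 1.0
x = 2.0                           # initial point x_1
for i in range(6):
    x_new = x - fp(x) / fpp(x)    # Newton step
    err_old = abs(x - x_star)
    err_new = abs(x_new - x_star)
    if err_new == 0.0:            # converged to machine precision
        break
    # Ratio of Eq. (6.5) with p = 2; it approaches a constant near 1.
    print(f"iteration {i + 1}: ratio = {err_new / err_old**2:.4f}")
    x = x_new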

6.1.4 Scaling of Design Variables


The rate of convergence of most unconstrained minimization methods can be improved
by scaling the design variables. For a quadratic objective function, the scaling of the design variables changes the condition number of the Hessian matrix.
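A brief sketch of the idea, assuming the quadratic form $f(X) = \frac{1}{2}X^T[A]X$ with a hypothetical ill-conditioned matrix $[A]$: a change of variables $X = [S]Y$ replaces the Hessian $[A]$ by $[S]^T[A][S]$, and a suitable diagonal $[S]$ can reduce the condition number dramatically:

import numpy as np

# Hypothetical ill-conditioned Hessian of f(X) = 1/2 X^T A X.
A = np.array([[100.0, 0.0],
              [0.0,   1.0]])

# Diagonal scaling chosen so each transformed variable has unit curvature.
S = np.diag(1.0 / np.sqrt(np.diag(A)))
A_scaled = S.T @ A @ S

print(np.linalg.cond(A))         # 100.0 -- gradient methods converge slowly
print(np.linalg.cond(A_scaled))  # 1.0   -- steepest descent converges in one step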