
Thus the optimum point is reached in two iterations. Even if we do not know this point
to be optimum, we will not be able to move from this point in the next iteration. This
can be verified as follows.

Iteration 3
Now

$$\nabla f_3 = \nabla f(\mathbf{X}_3) = \begin{Bmatrix} 0 \\ 0 \end{Bmatrix}, \quad |\nabla f_2|^2 = 2, \quad \text{and} \quad |\nabla f_3|^2 = 0$$

Thus

$$\mathbf{S}_3 = -\nabla f_3 + \frac{|\nabla f_3|^2}{|\nabla f_2|^2}\,\mathbf{S}_2 = -\begin{Bmatrix} 0 \\ 0 \end{Bmatrix} + \left(\frac{0}{2}\right)\begin{Bmatrix} 0 \\ 2 \end{Bmatrix} = \begin{Bmatrix} 0 \\ 0 \end{Bmatrix}$$

This shows that there is no search direction to reduce $f$ further, and hence $\mathbf{X}_3$ is
optimum.
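
For readers who want to verify this numerically, the sketch below reproduces the Fletcher–Reeves updates of this example. It assumes the objective and starting point stated in the earlier portion of the example, $f(x_1, x_2) = x_1 - x_2 + 2x_1^2 + 2x_1x_2 + x_2^2$ with $\mathbf{X}_1 = (0, 0)^T$, and uses an exact line search, which is available here because $f$ is quadratic.

```python
import numpy as np

# Assumed from the earlier portion of this example (not restated on this page):
# f(x1, x2) = x1 - x2 + 2*x1**2 + 2*x1*x2 + x2**2, starting from X1 = (0, 0).
def grad(x):
    return np.array([1.0 + 4.0 * x[0] + 2.0 * x[1],
                     -1.0 + 2.0 * x[0] + 2.0 * x[1]])

A = np.array([[4.0, 2.0],   # Hessian of the assumed quadratic; used only
              [2.0, 2.0]])  # for the exact line search below

x = np.array([0.0, 0.0])
g = grad(x)
s = -g                                # S1 = -grad f(X1)
for i in range(1, 4):
    lam = -(g @ s) / (s @ A @ s)      # exact minimizing step for a quadratic
    x = x + lam * s
    g_new = grad(x)
    beta = (g_new @ g_new) / (g @ g)  # Fletcher-Reeves ratio |grad f_{i+1}|^2 / |grad f_i|^2
    s = -g_new + beta * s             # next search direction
    print(f"X{i+1} = {x}, grad = {g_new}, S{i+1} = {s}")
    if not g_new.any():               # grad f3 = 0, so S3 = 0: nothing left to do
        break
    g = g_new
```

Running this prints $\mathbf{X}_3 = (-1, 1.5)$ with a zero gradient and a zero direction $\mathbf{S}_3$, matching the conclusion above.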

6.11 Newton’s Method


Newton’s method presented in Section 5.12.1 can be extended for the minimization of
multivariable functions. For this, consider the quadratic approximation of the function
$f(\mathbf{X})$ at $\mathbf{X} = \mathbf{X}_i$ using the Taylor’s series expansion

$$f(\mathbf{X}) = f(\mathbf{X}_i) + \nabla f_i^{T}(\mathbf{X} - \mathbf{X}_i) + \tfrac{1}{2}(\mathbf{X} - \mathbf{X}_i)^{T}[J_i](\mathbf{X} - \mathbf{X}_i) \tag{6.83}$$

where $[J_i] = [J]\big|_{\mathbf{X}_i}$ is the matrix of second partial derivatives (Hessian matrix) of $f$
evaluated at the point $\mathbf{X}_i$. By setting the partial derivatives of Eq. (6.83) equal to zero
for the minimum of $f(\mathbf{X})$, we obtain

$$\frac{\partial f(\mathbf{X})}{\partial x_j} = 0, \qquad j = 1, 2, \ldots, n \tag{6.84}$$

Equations (6.84) and (6.83) give

$$\nabla f = \nabla f_i + [J_i](\mathbf{X} - \mathbf{X}_i) = \mathbf{0} \tag{6.85}$$

If $[J_i]$ is nonsingular, Eqs. (6.85) can be solved to obtain an improved approximation
$(\mathbf{X} = \mathbf{X}_{i+1})$ as

$$\mathbf{X}_{i+1} = \mathbf{X}_i - [J_i]^{-1}\,\nabla f_i \tag{6.86}$$

Since higher-order terms have been neglected in Eq. (6.83), Eq. (6.86) is to be used
iteratively to find the optimum solution $\mathbf{X}^{*}$.
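
The iteration can be sketched in a few lines of Python. This is a minimal illustration of Eq. (6.86), not the book’s code; the function name `newton`, the tolerance, and the iteration cap are assumptions made for the example.

```python
import numpy as np

def newton(grad, hess, x1, tol=1e-8, max_iter=50):
    """Iterate Eq. (6.86): X_{i+1} = X_i - [J_i]^{-1} grad f_i."""
    x = np.asarray(x1, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:   # Eq. (6.85) holds: stationary point reached
            break
        # Solve [J_i] s = -grad f_i instead of forming [J_i]^{-1} explicitly;
        # mathematically this is the same step as Eq. (6.86).
        s = np.linalg.solve(hess(x), -g)
        x = x + s
    return x
```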
The sequence of points $\mathbf{X}_1, \mathbf{X}_2, \ldots, \mathbf{X}_{i+1}$ can be shown to converge to the actual
solution $\mathbf{X}^{*}$ from any initial point $\mathbf{X}_1$ sufficiently close to the solution $\mathbf{X}^{*}$, provided
that $[J_1]$ is nonsingular. It can be seen that Newton’s method uses the second partial
derivatives of the objective function (in the form of the matrix $[J_i]$) and hence is a
second-order method.
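
Because the quadratic approximation in Eq. (6.83) is exact when $f$ itself is quadratic, Eq. (6.86) reaches the minimum of a quadratic function in a single iteration from any starting point. Reusing the `newton` sketch above with the quadratic assumed in the earlier example:

```python
grad = lambda x: np.array([1 + 4*x[0] + 2*x[1], -1 + 2*x[0] + 2*x[1]])
hess = lambda x: np.array([[4.0, 2.0], [2.0, 2.0]])

print(newton(grad, hess, [0.0, 0.0]))   # -> [-1.   1.5] in one Newton step
```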