7.10 Sequential Quadratic Programming

The extension to include inequality constraints will be considered at a later stage. The Lagrange function, $L(X, \lambda)$, corresponding to the problem of Eq. (7.117) is given by


$$L = f(X) + \sum_{k=1}^{p} \lambda_k h_k(X) \tag{7.118}$$

where $\lambda_k$ is the Lagrange multiplier for the $k$th equality constraint. The Kuhn–Tucker necessary conditions can be stated as


$$\nabla L = 0 \quad\text{or}\quad \nabla f + \sum_{k=1}^{p} \lambda_k \nabla h_k = 0 \quad\text{or}\quad \nabla f + [A]\lambda = 0 \tag{7.119}$$

$$h_k(X) = 0, \quad k = 1, 2, \ldots, p \tag{7.120}$$

where $[A]$ is an $n \times p$ matrix whose $k$th column denotes the gradient of the function $h_k$. Equations (7.119) and (7.120) represent a set of $n + p$ nonlinear equations in $n + p$ unknowns ($x_i$, $i = 1, \ldots, n$ and $\lambda_k$, $k = 1, \ldots, p$). These nonlinear equations can be solved using Newton's method. For convenience, we rewrite Eqs. (7.119) and (7.120) as


$$F(Y) = 0 \tag{7.121}$$

where


$$F = \begin{Bmatrix} \nabla L \\ h \end{Bmatrix}_{(n+p)\times 1}, \quad Y = \begin{Bmatrix} X \\ \lambda \end{Bmatrix}_{(n+p)\times 1}, \quad 0 = \begin{Bmatrix} 0 \\ 0 \end{Bmatrix}_{(n+p)\times 1} \tag{7.122}$$
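To make Eqs. (7.121) and (7.122) concrete, the following is a minimal Python sketch that assembles $Y$ and the residual $F(Y)$ for a hypothetical example problem (the problem and all names below are illustrative assumptions, not from the text): minimize $f(X) = x_1^2 + x_2^2$ subject to $h_1(X) = x_1 + x_2 - 1 = 0$, so that $n = 2$ and $p = 1$.

```python
import numpy as np

# Hypothetical example (not from the text): minimize f(X) = x1^2 + x2^2
# subject to h1(X) = x1 + x2 - 1 = 0, so n = 2 and p = 1.

def grad_f(x):
    # Gradient of the objective f(X) = x1^2 + x2^2.
    return np.array([2.0 * x[0], 2.0 * x[1]])

def h(x):
    # Equality-constraint values h_k(X), k = 1, ..., p.
    return np.array([x[0] + x[1] - 1.0])

def A(x):
    # n x p matrix [A]; its kth column is the gradient of h_k.
    return np.array([[1.0],
                     [1.0]])

def F(y, n=2):
    # Residual F(Y) = {grad L; h} of Eq. (7.122), with Y = {X; lambda}.
    x, lam = y[:n], y[n:]
    grad_L = grad_f(x) + A(x) @ lam   # left side of Eq. (7.119)
    return np.concatenate([grad_L, h(x)])

y0 = np.array([0.5, 0.0, 0.0])        # initial guess for Y = {X; lambda}
print(F(y0))                          # nonzero: (7.119)-(7.120) not yet met
```

A point $Y^*$ with $F(Y^*) = 0$ satisfies the Kuhn–Tucker conditions (7.119) and (7.120).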

According to Newton's method, the solution of Eq. (7.121) can be found iteratively as (see Section 6.11)


$$Y_{j+1} = Y_j + \Delta Y_j \tag{7.123}$$

with


$$[\nabla F]_j^T \, \Delta Y_j = -F(Y_j) \tag{7.124}$$

where $Y_j$ is the solution at the start of the $j$th iteration, $\Delta Y_j$ is the change in $Y_j$ necessary to generate the improved solution $Y_{j+1}$, and $[\nabla F]_j = [\nabla F(Y_j)]$ is the $(n+p) \times (n+p)$ Jacobian matrix of the nonlinear equations, whose $i$th column denotes the gradient of the function $F_i(Y)$ with respect to the vector $Y$. By substituting Eqs. (7.121) and (7.122) into Eq. (7.124), we obtain
$$\begin{bmatrix} [\nabla^2 L] & [H] \\ [H]^T & [0] \end{bmatrix}_j \begin{Bmatrix} \Delta X \\ \Delta \lambda \end{Bmatrix}_j = -\begin{Bmatrix} \nabla L \\ h \end{Bmatrix}_j \tag{7.125}$$

where

$$\Delta X_j = X_{j+1} - X_j \tag{7.126}$$

$$\Delta \lambda_j = \lambda_{j+1} - \lambda_j \tag{7.127}$$
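Putting Eqs. (7.123)–(7.127) together gives the full Newton iteration. The sketch below applies it to the same hypothetical example used earlier (again an illustrative assumption, not the text's example); for that problem, $\nabla^2 L = 2[I]$ is constant and $[H] = [A]$.

```python
import numpy as np

# Same hypothetical example as above: minimize f(X) = x1^2 + x2^2
# subject to h1(X) = x1 + x2 - 1 = 0, so grad^2 L = 2I and [H] = [A].
n, p = 2, 1
H = np.array([[1.0], [1.0]])              # [H]: kth column is grad h_k

x = np.array([3.0, -2.0])                 # starting point X_0
lam = np.zeros(p)                         # starting multipliers lambda_0

for j in range(10):
    grad_L = np.array([2.0 * x[0], 2.0 * x[1]]) + H @ lam
    h = np.array([x[0] + x[1] - 1.0])
    rhs = -np.concatenate([grad_L, h])    # right side of Eq. (7.125)
    if np.linalg.norm(rhs) < 1e-10:       # Kuhn-Tucker conditions met
        break
    # Assemble and solve the (n+p) x (n+p) system of Eq. (7.125).
    K = np.block([[2.0 * np.eye(n), H],
                  [H.T, np.zeros((p, p))]])
    delta = np.linalg.solve(K, rhs)       # {Delta X; Delta lambda}_j
    x = x + delta[:n]                     # Eq. (7.126)
    lam = lam + delta[n:]                 # Eq. (7.127)

print(x, lam)                             # X* = (0.5, 0.5), lambda* = -1
```

Because $f$ is quadratic and $h_1$ is linear here, the linearization (7.125) is exact and the iteration converges in a single step; for general nonlinear $f$ and $h_k$, the blocks $[\nabla^2 L]_j$ and $[H]_j$ must be re-evaluated at each $Y_j$.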