Step 2: We compute the GRG at the current X using Eq. (7.105). Since

$$
\nabla_Y f = \begin{Bmatrix} \dfrac{\partial f}{\partial x_1} \\[2mm] \dfrac{\partial f}{\partial x_2} \end{Bmatrix}
= \begin{Bmatrix} 2(-0.576 + 0.024) \\ -2(-0.576 + 0.024) + 4(-0.024 - 1.2477)^3 \end{Bmatrix}
= \begin{Bmatrix} -1.104 \\ -7.1225 \end{Bmatrix}
$$

$$
\nabla_Z f = \left\{ \dfrac{\partial f}{\partial z_1} \right\}
= \left\{ \dfrac{\partial f}{\partial x_3} \right\}
= \left\{ -4(-0.024 - 1.2477)^3 \right\} = \{ 8.2265 \}
$$

$$
[C] = \left[ \dfrac{\partial g_1}{\partial x_1} \;\; \dfrac{\partial g_1}{\partial x_2} \right]
= \left[ \bigl(1 + (-0.024)^2\bigr) \;\; 2(-0.576)(-0.024) \right]
= [1.000576 \;\; 0.027648]
$$

$$
[D] = \left[ \dfrac{\partial g_1}{\partial x_3} \right] = [4 x_3^3] = [4(1.2477)^3] = [7.7694]
$$

$$
[D]^{-1}[C] = \frac{1}{7.7694}\,[1.000576 \;\; 0.027648] = [0.128784 \;\; 0.003558]
$$

we obtain

$$
G_R = \nabla_Y f - \left[ [D]^{-1}[C] \right]^T \nabla_Z f
= \begin{Bmatrix} -1.104 \\ -7.1225 \end{Bmatrix}
- \begin{Bmatrix} 0.128784 \\ 0.003558 \end{Bmatrix} (8.2265)
= \begin{Bmatrix} -2.1634 \\ -7.1518 \end{Bmatrix}
$$
Since $G_R \neq \mathbf{0}$, we need to proceed to the next step.
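As a quick numerical check of this step, the following is a minimal NumPy sketch of the same computation. The point $\mathbf{X} = (-0.576,\, -0.024,\, 1.2477)$ and the gradient expressions are inferred from the values substituted above; the variable names are ours.

```python
import numpy as np

# Current point (x1, x2, x3), inferred from the substitutions above
x1, x2, x3 = -0.576, -0.024, 1.2477

# Gradient of f with respect to the independent variables Y = (x1, x2)
grad_Y_f = np.array([2 * (x1 - x2),
                     -2 * (x1 - x2) + 4 * (x2 - x3) ** 3])

# Gradient of f with respect to the dependent variable Z = (x3,)
grad_Z_f = np.array([-4 * (x2 - x3) ** 3])

# Constraint-gradient split: [C] = dg1/dY, [D] = dg1/dZ
C = np.array([[1 + x2 ** 2, 2 * x1 * x2]])
D = np.array([[4 * x3 ** 3]])

# Generalized reduced gradient, Eq. (7.105):
#   G_R = grad_Y f - ([D]^-1 [C])^T grad_Z f
G_R = grad_Y_f - np.linalg.solve(D, C).T @ grad_Z_f
print(G_R)   # approximately [-2.1634, -7.1518]
```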
Note: It can be seen that the value of the objective function has been reduced from an initial value of 21.16 to 2.9201 in one iteration.
7.10 Sequential Quadratic Programming
Sequential quadratic programming is one of the most recently developed and perhaps one of the best methods of optimization. The method has a theoretical basis related to (1) the solution of a set of nonlinear equations using Newton's method, and (2) the derivation of simultaneous nonlinear equations by applying the Kuhn–Tucker conditions to the Lagrangian of the constrained optimization problem. In this section we present both the derivation of the equations and the solution procedure of the sequential quadratic programming approach.
7.10.1 Derivation
Consider a nonlinear optimization problem with only equality constraints:

Find $\mathbf{X}$ which minimizes $f(\mathbf{X})$

subject to

$$h_k(\mathbf{X}) = 0, \quad k = 1, 2, \ldots, p \tag{7.117}$$
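Before working through the algebra, it may help to see point (1) of the theoretical basis above in concrete form: Newton's method applied to the Kuhn–Tucker conditions of the Lagrangian $L(\mathbf{X}, \boldsymbol{\lambda}) = f(\mathbf{X}) + \sum_k \lambda_k h_k(\mathbf{X})$. The following is a minimal sketch on a small hypothetical problem; the problem data and all names are illustrative, not from the text.

```python
import numpy as np

# Hypothetical problem for illustration: minimize f(X) = x1^2 + x2^2
# subject to h1(X) = x1 + x2 - 1 = 0 (solution: x1 = x2 = 0.5, lambda = -1).
def grad_f(x):
    return 2 * x

def hess_f(x):
    return 2 * np.eye(2)

def h(x):
    return np.array([x[0] + x[1] - 1.0])

def jac_h(x):
    return np.array([[1.0, 1.0]])

# Newton's method on the Kuhn-Tucker equations of the Lagrangian
#   L(X, lam) = f(X) + lam^T h(X):
#   grad_x L = grad f + J_h^T lam = 0,   h(X) = 0.
x = np.array([2.0, -1.0])        # arbitrary starting point
lam = np.zeros(1)
for _ in range(10):
    J = jac_h(x)
    H = hess_f(x)                # Hessian of L (h is linear here)
    # Newton (KKT) system: [[H, J^T], [J, 0]] [dx, dlam] = -[grad_x L, h]
    KKT = np.block([[H, J.T], [J, np.zeros((1, 1))]])
    rhs = -np.concatenate([grad_f(x) + J.T @ lam, h(x)])
    step = np.linalg.solve(KKT, rhs)
    x += step[:2]
    lam += step[2:]
print(x, lam)   # [0.5, 0.5], [-1.0]
```

Because the objective is quadratic and the constraint linear in this toy case, a single Newton step reaches the solution; the derivation below shows how the same linearized system arises in the general case.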