Step 7: We obtain the new point $\mathbf{X}_2$ as
$$
\mathbf{X}_2 = \mathbf{X}_1 + \lambda_1 \mathbf{S}_1
= \begin{Bmatrix} 1.0 \\ 1.0 \end{Bmatrix}
+ 0.2425 \begin{Bmatrix} -0.9701 \\ 0.2425 \end{Bmatrix}
= \begin{Bmatrix} 0.7647 \\ 1.0588 \end{Bmatrix}
$$
Since $\lambda_1 = \lambda_1^*$ and $\lambda_1^* < \lambda_M$, no new constraint has become active at $\mathbf{X}_2$ and hence the matrix $\mathbf{N}_1$ remains unaltered.
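For readers who want to reproduce the arithmetic, the following minimal sketch (in Python with NumPy, using only the values quoted above; the variable names are our own) checks the update step:

```python
import numpy as np

# Values carried over from the preceding steps of the example
X1 = np.array([1.0, 1.0])          # current point X_1
S1 = np.array([-0.9701, 0.2425])   # search direction S_1
lam1 = 0.2425                      # step length lambda_1 = lambda_1*

X2 = X1 + lam1 * S1
print(X2)  # approximately [0.7647, 1.0588]
```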
Iteration $i = 2$
Step 3: Since $g_1(\mathbf{X}_2) = 0$, we set $p = 1$, $j_1 = 1$, and go to step 4.
Step 4:
$$
\mathbf{N}_1 = \begin{bmatrix} 1 \\ 4 \end{bmatrix}, \qquad
\mathbf{P}_2 = \frac{1}{17} \begin{bmatrix} 16 & -4 \\ -4 & 1 \end{bmatrix}
$$
$$
\nabla f(\mathbf{X}_2)
= \begin{Bmatrix} 2x_1 - 2 \\ 2x_2 - 4 \end{Bmatrix}_{\mathbf{X}_2}
= \begin{Bmatrix} 1.5294 - 2.0 \\ 2.1176 - 4.0 \end{Bmatrix}
= \begin{Bmatrix} -0.4706 \\ -1.8824 \end{Bmatrix}
$$
$$
\mathbf{S}_2 = -\mathbf{P}_2 \nabla f(\mathbf{X}_2)
= \frac{1}{17} \begin{bmatrix} 16 & -4 \\ -4 & 1 \end{bmatrix}
\begin{Bmatrix} 0.4706 \\ 1.8824 \end{Bmatrix}
= \begin{Bmatrix} 0.0 \\ 0.0 \end{Bmatrix}
$$
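The matrix $\mathbf{P}_2$ above is the standard projection matrix $\mathbf{P} = \mathbf{I} - \mathbf{N}(\mathbf{N}^T\mathbf{N})^{-1}\mathbf{N}^T$ built from the active constraint normal. A small verification sketch (our own code, not from the text) reproduces $\mathbf{P}_2$, $\nabla f(\mathbf{X}_2)$, and $\mathbf{S}_2$:

```python
import numpy as np

N1 = np.array([[1.0], [4.0]])  # gradient of the active constraint g_1
# Projection matrix: P = I - N (N^T N)^-1 N^T
P2 = np.eye(2) - N1 @ np.linalg.inv(N1.T @ N1) @ N1.T
# P2 equals (1/17) * [[16, -4], [-4, 1]]

X2 = np.array([0.7647, 1.0588])
grad_f = np.array([2 * X2[0] - 2, 2 * X2[1] - 4])  # [-0.4706, -1.8824]

S2 = -P2 @ grad_f
print(S2)  # approximately [0.0, 0.0]: the projected direction vanishes
```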
Step 5: Since $\mathbf{S}_2 = \mathbf{0}$, we compute the vector $\lambda$ at $\mathbf{X}_2$ as
$$
\lambda = -(\mathbf{N}_1^T \mathbf{N}_1)^{-1} \mathbf{N}_1^T \nabla f(\mathbf{X}_2)
= -\frac{1}{17} \begin{bmatrix} 1 & 4 \end{bmatrix}
\begin{Bmatrix} -0.4706 \\ -1.8824 \end{Bmatrix}
= 0.4707 > 0
$$
The nonnegative value of $\lambda$ indicates that we have reached the optimum point, and hence
$$
\mathbf{X}_{\text{opt}} = \mathbf{X}_2
= \begin{Bmatrix} 0.7647 \\ 1.0588 \end{Bmatrix}
\quad \text{with} \quad f_{\text{opt}} = -4.059
$$
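As a final check, the sign test on $\lambda$ and the optimal objective value can be confirmed numerically. The sketch below assumes, consistently with the gradient $\{2x_1 - 2,\ 2x_2 - 4\}^T$ shown above, that the objective is $f(\mathbf{X}) = x_1^2 + x_2^2 - 2x_1 - 4x_2$:

```python
import numpy as np

N1 = np.array([[1.0], [4.0]])
X2 = np.array([0.7647, 1.0588])
grad_f = np.array([2 * X2[0] - 2, 2 * X2[1] - 4])

# Lagrange multiplier estimate: lambda = -(N^T N)^-1 N^T grad_f
lam = -np.linalg.inv(N1.T @ N1) @ N1.T @ grad_f
print(lam)    # approximately 0.4706 > 0, so X2 satisfies the optimality test

# Objective value at the optimum (f inferred from its gradient)
f_opt = X2[0]**2 + X2[1]**2 - 2 * X2[0] - 4 * X2[1]
print(f_opt)  # approximately -4.059
```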
7.9 Generalized Reduced Gradient Method
The generalized reduced gradient (GRG) method is an extension of the reduced gradient method, which was originally presented for solving problems with only linear constraints [7.11]. To see the details of the GRG method, consider the nonlinear programming problem:
$$\text{Minimize } f(\mathbf{X}) \tag{7.79}$$
subject to
$$h_j(\mathbf{X}) \leq 0, \quad j = 1, 2, \ldots, m \tag{7.80}$$
$$l_k(\mathbf{X}) = 0, \quad k = 1, 2, \ldots, l \tag{7.81}$$
$$x_i^{(l)} \leq x_i \leq x_i^{(u)}, \quad i = 1, 2, \ldots, n \tag{7.82}$$
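As a rough illustration (our own sketch, not part of the text; all names are hypothetical), the problem data of Eqs. (7.79) to (7.82) can be collected in a simple container before any GRG machinery is applied:

```python
import numpy as np
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class GRGProblem:
    """Container for the NLP of Eqs. (7.79)-(7.82); names are illustrative."""
    f: Callable[[np.ndarray], float]        # objective f(X)
    h: List[Callable[[np.ndarray], float]]  # inequality constraints h_j(X) <= 0
    l: List[Callable[[np.ndarray], float]]  # equality constraints l_k(X) = 0
    lower: np.ndarray                       # lower bounds x_i^(l)
    upper: np.ndarray                       # upper bounds x_i^(u)

# Example instance: minimize x1^2 + x2^2 subject to
# x1 + x2 - 1 = 0 and 0 <= x_i <= 2
prob = GRGProblem(
    f=lambda x: x[0]**2 + x[1]**2,
    h=[],
    l=[lambda x: x[0] + x[1] - 1.0],
    lower=np.zeros(2),
    upper=2.0 * np.ones(2),
)
```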