398 Nonlinear Programming III: Constrained Optimization Techniques
$$t_1 \frac{\partial f}{\partial x_1} + t_2 \frac{\partial f}{\partial x_2} + \cdots + t_n \frac{\partial f}{\partial x_n} + \alpha + y_{p+1} = \sum_{i=1}^{n} \frac{\partial f}{\partial x_i}$$

$$t_1 + y_{p+2} = 2$$

$$t_2 + y_{p+3} = 2$$

$$\vdots$$

$$t_n + y_{p+n+1} = 2$$

$$t_1 \ge 0, \quad t_2 \ge 0, \quad \ldots, \quad t_n \ge 0, \quad \alpha \ge 0$$
where $y_1, y_2, \ldots, y_{p+n+1}$ are the nonnegative slack variables. The simplex method
discussed in Chapter 3 can be used to solve the direction-finding problem stated in
Eqs. (7.40). This problem can also be solved by more sophisticated methods that treat
the upper bounds on $t_i$ in a special manner instead of treating them as constraints
[7.6]. If the solution of the direction-finding problem gives a value of $\alpha^* > 0$, $f(\mathbf{X})$
can be improved by moving along the usable feasible direction
$$\mathbf{S} = \begin{Bmatrix} s_1 \\ s_2 \\ \vdots \\ s_n \end{Bmatrix} = \begin{Bmatrix} t_1^* - 1 \\ t_2^* - 1 \\ \vdots \\ t_n^* - 1 \end{Bmatrix}$$
If, however, $\alpha^* = 0$, it can be shown that the Kuhn–Tucker optimality conditions are
satisfied at $\mathbf{X}_i$, and hence the point $\mathbf{X}_i$ can be taken as the optimal solution.
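As a concrete illustration, the direction-finding problem can be posed directly in terms of the components $s_i$ (with bounds $-1 \le s_i \le 1$, avoiding the substitution $t_i = s_i + 1$) and handed to a linear programming solver. The following is a minimal sketch, assuming SciPy's `linprog` in place of the simplex routine of Chapter 3; the helper name `find_direction`, the scalar push-off factor, and the toy gradients are illustrative choices, not from the text.

```python
# Sketch: solve the direction-finding LP in terms of s_i and alpha.
# Decision variables: [s_1, ..., s_n, alpha]; we minimize -alpha,
# i.e. maximize alpha.
import numpy as np
from scipy.optimize import linprog

def find_direction(grad_f, active_grad_g, push_off=1.0):
    """grad_f: (n,) gradient of f at X_i.
    active_grad_g: (p, n) gradients of the active constraints g_j at X_i.
    push_off: push-off factor theta_j (a single scalar here for simplicity).
    Returns (S, alpha_star)."""
    n = grad_f.size
    p = active_grad_g.shape[0]
    c = np.zeros(n + 1)
    c[-1] = -1.0  # minimize -alpha  <=>  maximize alpha
    # Usability:   grad_f . s + alpha        <= 0
    # Feasibility: grad_gj . s + theta*alpha <= 0 for each active g_j
    A_ub = np.vstack([
        np.append(grad_f, 1.0),
        np.hstack([active_grad_g, push_off * np.ones((p, 1))]),
    ])
    b_ub = np.zeros(1 + p)
    bounds = [(-1, 1)] * n + [(0, None)]  # -1 <= s_i <= 1, alpha >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:n], res.x[-1]

# Toy example: grad f = (1, 1), one active constraint with grad g = (-1, 0)
S, alpha = find_direction(np.array([1.0, 1.0]), np.array([[-1.0, 0.0]]))
```

In this toy case the solver returns $\alpha^* > 0$, so the resulting $\mathbf{S}$ is a usable feasible direction; $\alpha^* = 0$ would instead signal that the Kuhn–Tucker conditions hold at the current point.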
7.7.2 Determination of Step Length
After finding a usable feasible direction $\mathbf{S}_i$ at any point $\mathbf{X}_i$, we have to determine a
suitable step length $\lambda_i$ to obtain the next point $\mathbf{X}_{i+1}$ as

$$\mathbf{X}_{i+1} = \mathbf{X}_i + \lambda_i \mathbf{S}_i \tag{7.41}$$
There are several ways of computing the step length. One of the methods is to determine
an optimal step length $\lambda_i$ that minimizes $f(\mathbf{X}_i + \lambda \mathbf{S}_i)$ such that the new point $\mathbf{X}_{i+1}$
given by Eq. (7.41) lies in the feasible region. Another method is to choose the step
length $\lambda_i$ by trial and error so that it satisfies the relations

$$f(\mathbf{X}_i + \lambda_i \mathbf{S}_i) \le f(\mathbf{X}_i)$$

$$g_j(\mathbf{X}_i + \lambda_i \mathbf{S}_i) \le 0, \quad j = 1, 2, \ldots, m$$
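The trial-and-error rule above can be sketched as a simple backtracking loop: start from an initial guess and shrink $\lambda$ until both conditions hold. This is a minimal sketch; the starting value `lam0`, the shrink factor, and the example functions are illustrative assumptions, not values from the text.

```python
# Sketch of the trial-and-error step-length rule: shrink lambda until
# f(X + lam*S) <= f(X) and every g_j(X + lam*S) <= 0.
import numpy as np

def step_length(f, gs, X, S, lam0=1.0, shrink=0.5, max_iter=50):
    """f: objective function; gs: list of constraint functions g_j
    (a point is feasible iff g_j <= 0 for all j).
    X: current point X_i; S: usable feasible direction S_i."""
    fX = f(X)
    lam = lam0
    for _ in range(max_iter):
        X_new = X + lam * S
        # Accept lambda only if it both improves f and stays feasible.
        if f(X_new) <= fX and all(g(X_new) <= 0 for g in gs):
            return lam
        lam *= shrink
    raise RuntimeError("no acceptable step length found")

# Example: f = x1^2 + x2^2 with constraint g = 1 - x1 <= 0,
# starting at X = (2, 1) along the descent direction S = (-1, -0.5).
f = lambda x: x[0] ** 2 + x[1] ** 2
g = lambda x: 1.0 - x[0]
lam = step_length(f, [g], np.array([2.0, 1.0]), np.array([-1.0, -0.5]))
```

Since the full step already reduces $f$ and lands on the boundary $g = 0$, no shrinking is needed in this example; a step that left the feasible region would instead be halved until both relations were satisfied.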