7.21 Checking the Convergence of Constrained Optimization Problems
Before stopping the iterative process and using the solution with confidence, one should verify that the point obtained is indeed the optimum. In addition to the convergence criteria discussed earlier, the following two methods can also be used to test the point for optimality.
7.21.1 Perturbing the Design Vector
Since the optimum point
$$X^* = \begin{Bmatrix} x_1^* \\ x_2^* \\ \vdots \\ x_n^* \end{Bmatrix}$$
corresponds to the minimum function value subject to the satisfaction of the constraints $g_j(X^*) \le 0$, $j = 1, 2, \ldots, m$ (the equality constraints can also be included, if necessary), we perturb $X^*$ by changing each of the design variables, one at a time, by a small amount, and evaluate the values of $f$ and $g_j$, $j = 1, 2, \ldots, m$. Thus if
$$X_i^+ = X^* + \Delta X_i$$
$$X_i^- = X^* - \Delta X_i$$
where
Xi=
0
0
xi
0
0
←ith row
xiis a small perturbation inxithat can be taken as 0.1 to 2.0 % ofxi∗. Evaluate
f(X+i) ; f(X−i); gj(X+i)
gj(X−i), j= 1 , 2 ,... , m for i= 1 , 2 ,... , n
If
$$f(X_i^+) \ge f(X^*); \quad g_j(X_i^+) \le 0, \quad j = 1, 2, \ldots, m$$
$$f(X_i^-) \ge f(X^*); \quad g_j(X_i^-) \le 0, \quad j = 1, 2, \ldots, m$$
for $i = 1, 2, \ldots, n$, then $X^*$ can be taken as the constrained optimum point of the original problem.
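This test is straightforward to automate. The following is a minimal Python sketch under the assumptions that the objective $f$ and the constraints $g_j$ are available as callables and that every $x_i^* \neq 0$; the names perturbation_test, gs, and rel_step are illustrative, not from the text:

```python
import numpy as np

def perturbation_test(f, gs, x_star, rel_step=0.01):
    """Perturb each design variable of X* in turn and verify that the
    objective does not decrease while all g_j(X) <= 0 remain satisfied."""
    x_star = np.asarray(x_star, dtype=float)
    f_star = f(x_star)
    for i in range(x_star.size):
        dx = np.zeros_like(x_star)
        dx[i] = rel_step * x_star[i]          # 0.1 to 2.0% of x_i*
        for x in (x_star + dx, x_star - dx):  # X_i^+ and X_i^-
            if f(x) < f_star:                 # objective improved: not optimal
                return False
            if any(g(x) > 0.0 for g in gs):   # perturbed point infeasible
                return False
    return True
```

Note that when a constraint is active at $X^*$, one of the perturbed points may become slightly infeasible even at a true optimum, so in practice a small tolerance on $g_j \le 0$ is often allowed.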
7.21.2 Testing the Kuhn–Tucker Conditions
Since the Kuhn–Tucker conditions, Eqs. (2.73) and (2.74), are necessarily satisfied† by the optimum point of any nonlinear programming problem, we can at least test whether these conditions hold at the point in question before accepting it as the optimum.
†These may not be sufficient to guarantee a global minimum point for nonconvex programming problems.
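As an illustration of such a test, here is a minimal Python sketch. It assumes the gradients of $f$ and of the $g_j$ are available as callables, identifies the active constraints by a tolerance, and solves for the multipliers by least squares; the names kkt_test, grad_gs, and tol, and the least-squares solve itself, are our assumptions rather than the book's procedure:

```python
import numpy as np

def kkt_test(grad_f, gs, grad_gs, x_star, tol=1e-6):
    """Check grad f(X*) + sum_j lambda_j grad g_j(X*) = 0 with
    lambda_j >= 0, summing over the constraints active at X*."""
    x_star = np.asarray(x_star, dtype=float)
    gf = np.asarray(grad_f(x_star), dtype=float)
    # constraints with g_j(X*) ~ 0 are the active ones
    active = [np.asarray(dg(x_star), dtype=float)
              for g, dg in zip(gs, grad_gs) if abs(g(x_star)) <= tol]
    if not active:
        # no active constraint: X* must be an ordinary stationary point
        return np.linalg.norm(gf) <= tol
    A = np.column_stack(active)                    # one column per active g_j
    lam, *_ = np.linalg.lstsq(A, -gf, rcond=None)  # best-fit multipliers
    stationary = np.linalg.norm(gf + A @ lam) <= tol
    return stationary and bool(np.all(lam >= -tol))
```

A nonnegative multiplier vector together with a small stationarity residual indicates that the Kuhn–Tucker conditions are satisfied to within the chosen tolerance.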