

2.5.1 Kuhn–Tucker Conditions


As shown above, the conditions to be satisfied at a constrained minimum point, $\mathbf{X}^*$, of the problem stated in Eq. (2.58) can be expressed as

$$\frac{\partial f}{\partial x_i} + \sum_{j \in J_1} \lambda_j \frac{\partial g_j}{\partial x_i} = 0, \quad i = 1, 2, \ldots, n \tag{2.73}$$

$$\lambda_j > 0, \quad j \in J_1 \tag{2.74}$$

These are called Kuhn–Tucker conditions after the mathematicians who derived them
as the necessary conditions to be satisfied at a relative minimum of $f(\mathbf{X})$ [2.8]. These
conditions are, in general, not sufficient to ensure a relative minimum. However, there is
a class of problems, called convex programming problems,† for which the Kuhn–Tucker
conditions are necessary and sufficient for a global minimum.
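As a quick illustration of Eqs. (2.73) and (2.74) (a made-up example, not one from the text), consider minimizing $f(x) = x^2$ subject to $g_1(x) = 1 - x \leq 0$. The minimum lies at $x^* = 1$, where $g_1$ is active, so $J_1 = \{1\}$ and

$$\frac{\partial f}{\partial x} + \lambda_1 \frac{\partial g_1}{\partial x} = 2x - \lambda_1 = 0 \quad \Longrightarrow \quad \lambda_1 = 2x^* = 2 > 0,$$

so both conditions hold at $x^*$.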
If the set of active constraints is not known, the Kuhn–Tucker conditions can be
stated as follows:
$$\begin{aligned}
\frac{\partial f}{\partial x_i} + \sum_{j=1}^{m} \lambda_j \frac{\partial g_j}{\partial x_i} &= 0, \quad i = 1, 2, \ldots, n \\
\lambda_j g_j &= 0,^{\ddagger} \quad j = 1, 2, \ldots, m \\
g_j &\leq 0, \quad j = 1, 2, \ldots, m \\
\lambda_j &\geq 0, \quad j = 1, 2, \ldots, m
\end{aligned} \tag{2.75}$$

Note that if the problem is one of maximization or if the constraints are of the type
$g_j \geq 0$, the $\lambda_j$ have to be nonpositive in Eqs. (2.75). On the other hand, if the problem is
one of maximization with constraints in the form $g_j \geq 0$, the $\lambda_j$ have to be nonnegative
in Eqs. (2.75).
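Conditions (2.75) can also be verified numerically at a candidate point. The sketch below is a minimal illustration, assuming a made-up two-variable problem with the candidate minimum $\mathbf{X}^* = (0.5, 1.5)$ and multipliers worked out by hand; none of it comes from the text.

```python
import numpy as np

# Assumed example problem (not from the text):
#   minimize  f(x) = (x1 - 1)^2 + (x2 - 2)^2
#   subject to  g1(x) = x1 + x2 - 2 <= 0,   g2(x) = -x1 <= 0
# At x* = (0.5, 1.5), g1 is active and g2 is inactive (so lambda_2 = 0).

def grad_f(x):
    return np.array([2.0 * (x[0] - 1.0), 2.0 * (x[1] - 2.0)])

def g(x):
    return np.array([x[0] + x[1] - 2.0, -x[0]])  # constraint values g_j(x)

def grad_g(x):
    # Rows are the (constant) gradients of the linear constraints g_j.
    return np.array([[1.0, 1.0], [-1.0, 0.0]])

def satisfies_kuhn_tucker(x, lam, tol=1e-8):
    """Check all four statements of Eqs. (2.75) at the point x."""
    stationary = grad_f(x) + grad_g(x).T @ lam  # df/dxi + sum lambda_j dgj/dxi
    return (np.all(np.abs(stationary) < tol) and   # = 0, i = 1, ..., n
            np.all(np.abs(lam * g(x)) < tol) and   # lambda_j g_j = 0
            np.all(g(x) <= tol) and                # g_j <= 0
            np.all(lam >= -tol))                   # lambda_j >= 0

x_star = np.array([0.5, 1.5])
lam_star = np.array([1.0, 0.0])  # lambda_1 from stationarity; lambda_2 = 0
print(satisfies_kuhn_tucker(x_star, lam_star))  # True
```

Here complementary slackness holds because $g_1(\mathbf{X}^*) = 0$, while $\lambda_2 = 0$ accounts for the inactive constraint $g_2(\mathbf{X}^*) < 0$.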

2.5.2 Constraint Qualification


When the optimization problem is stated as

$$\begin{aligned}
\text{Minimize } & f(\mathbf{X}) \\
\text{subject to } & g_j(\mathbf{X}) \leq 0, \quad j = 1, 2, \ldots, m \\
& h_k(\mathbf{X}) = 0, \quad k = 1, 2, \ldots, p
\end{aligned} \tag{2.76}$$

the Kuhn–Tucker conditions become

$$\begin{aligned}
\nabla f + \sum_{j=1}^{m} \lambda_j \nabla g_j - \sum_{k=1}^{p} \beta_k \nabla h_k &= \mathbf{0} \\
\lambda_j g_j &= 0, \quad j = 1, 2, \ldots, m
\end{aligned}$$
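When no inequality constraint is active (every $\lambda_j = 0$), these conditions reduce to the stationarity equation together with $h_k(\mathbf{X}) = 0$. For a quadratic objective with linear equality constraints, that is a linear system in $\mathbf{X}$ and the $\beta_k$, as the following sketch shows (an assumed example, not from the text):

```python
import numpy as np

# Assumed example (not from the text):
#   minimize  f(x) = x1^2 + x2^2   subject to  h1(x) = x1 + x2 - 1 = 0
# With no inequality constraints, the conditions above reduce to
#   grad f(x) - beta_1 * grad h1(x) = 0   and   h1(x) = 0,
# which for this quadratic is linear in (x1, x2, beta_1).

A = np.array([[1.0, 1.0]])               # grad h1 as a row; h1(x) = A @ x - 1
K = np.block([[2.0 * np.eye(2), -A.T],   # 2 x - beta_1 * grad h1 = 0
              [A, np.zeros((1, 1))]])    # x1 + x2 = 1
rhs = np.array([0.0, 0.0, 1.0])
x1, x2, beta1 = np.linalg.solve(K, rhs)
print((x1, x2), beta1)                   # (0.5, 0.5) with beta_1 = 1.0
```

Both the minimum point and the multiplier come out of a single solve; with active inequalities the system would be augmented by the active $g_j$, followed by the sign check $\lambda_j \geq 0$.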

†See Sections 2.6 and 7.14 for a detailed discussion of convex programming problems.
‡This condition is the same as Eq. (2.64).