2.5 Multivariable Optimization with Inequality Constraints

$$g_j \le 0, \qquad j = 1, 2, \ldots, m$$

$$h_k = 0, \qquad k = 1, 2, \ldots, p$$

$$\lambda_j \ge 0, \qquad j = 1, 2, \ldots, m \tag{2.77}$$

where $\lambda_j$ and $\beta_k$ denote the Lagrange multipliers associated with the constraints $g_j \le 0$ and $h_k = 0$, respectively. Although we found qualitatively that the Kuhn–Tucker conditions represent the necessary conditions of optimality, the following theorem gives the precise conditions of optimality.
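
To make these conditions concrete, the following sketch (not from the text; the problem, candidate point, and multiplier are illustrative assumptions) checks Eqs. (2.77), together with the standard stationarity and complementary-slackness conditions of the Kuhn–Tucker system, numerically with NumPy:

```python
import numpy as np

# A hypothetical illustrative problem:
#   minimize f(x) = x1^2 + x2^2  subject to  g1(x) = 1 - x1 <= 0,
# whose optimum is x* = (1, 0) with multiplier lambda_1 = 2.
x_star = np.array([1.0, 0.0])
lam = np.array([2.0])

grad_f = 2 * x_star                        # gradient of f at x*: [2, 0]
grad_g = [np.array([-1.0, 0.0])]           # gradient of g1 at x*
g_vals = [1.0 - x_star[0]]                 # g1(x*) = 0 (active)

tol = 1e-8
stationary = np.allclose(grad_f + sum(l * gg for l, gg in zip(lam, grad_g)), 0.0)
feasible   = all(g <= tol for g in g_vals)                        # g_j <= 0
nonneg     = all(l >= -tol for l in lam)                          # lambda_j >= 0
complement = all(abs(l * g) <= tol for l, g in zip(lam, g_vals))  # lambda_j g_j = 0

print(stationary and feasible and nonneg and complement)          # True
```

Such a check only confirms a given candidate; the theorem below states when multipliers satisfying these conditions are guaranteed to exist.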


Theorem 2.7 Let $\mathbf{X}^*$ be a feasible solution to the problem of Eqs. (2.76). If $\nabla g_j(\mathbf{X}^*)$, $j \in J_1$ (the set of constraints active at $\mathbf{X}^*$), and $\nabla h_k(\mathbf{X}^*)$, $k = 1, 2, \ldots, p$, are linearly independent, there exist $\boldsymbol{\lambda}^*$ and $\boldsymbol{\beta}^*$ such that $(\mathbf{X}^*, \boldsymbol{\lambda}^*, \boldsymbol{\beta}^*)$ satisfy Eqs. (2.77).


Proof: See Ref. [2.11].
The requirement that $\nabla g_j(\mathbf{X}^*)$, $j \in J_1$, and $\nabla h_k(\mathbf{X}^*)$, $k = 1, 2, \ldots, p$, be linearly independent is called the constraint qualification. If the constraint qualification is violated at the optimum point, Eqs. (2.77) may or may not have a solution. It is difficult to verify the constraint qualification without knowing $\mathbf{X}^*$ beforehand. However, the constraint qualification is always satisfied for problems having any of the following characteristics:


1. All the inequality and equality constraint functions are linear.
2. All the inequality constraint functions are convex, all the equality constraint functions are linear, and at least one feasible vector $\tilde{\mathbf{X}}$ exists that lies strictly inside the feasible region (a numerical check of this condition is sketched after this list), so that

$$g_j(\tilde{\mathbf{X}}) < 0, \quad j = 1, 2, \ldots, m \qquad \text{and} \qquad h_k(\tilde{\mathbf{X}}) = 0, \quad k = 1, 2, \ldots, p$$
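
As an illustration of characteristic 2, the sketch below tests a trial vector for strict feasibility; the constraint functions and the trial point are hypothetical stand-ins, not from the text:

```python
import numpy as np

# Hypothetical convex problem used only to illustrate the check:
# g1 convex, g2 and h1 linear, as characteristic 2 requires.
g = [lambda x: x[0]**2 + x[1]**2 - 4.0,   # convex:  g1(X) <= 0
     lambda x: x[0] + x[1] - 1.0]         # linear:  g2(X) <= 0
h = [lambda x: x[0] - x[1]]               # linear:  h1(X)  = 0

X_tilde = np.array([0.2, 0.2])            # trial vector (an assumption)

strictly_inside = (all(gj(X_tilde) < 0.0 for gj in g) and
                   all(abs(hk(X_tilde)) < 1e-12 for hk in h))
print(strictly_inside)   # True: the constraint qualification is guaranteed
```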

Example 2.13 Consider the following problem:


$$\text{Minimize } f(x_1, x_2) = (x_1 - 1)^2 + x_2^2 \tag{E1}$$

subject to

$$g_1(x_1, x_2) = x_1^3 - 2x_2 \le 0 \tag{E2}$$

$$g_2(x_1, x_2) = x_1^3 + 2x_2 \le 0 \tag{E3}$$

Determine whether the constraint qualification and the Kuhn–Tucker conditions are
satisfied at the optimum point.
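
As a preliminary numerical experiment (a sketch using SciPy's SLSQP solver; the starting point is an arbitrary assumption, and convergence near a point where the constraint qualification fails may be slow), one can confirm where the minimum lies:

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] - 1.0)**2 + x[1]**2
# SciPy's 'ineq' convention is fun(x) >= 0, so each g_j <= 0 is negated.
cons = [{'type': 'ineq', 'fun': lambda x: -(x[0]**3 - 2.0 * x[1])},  # g1
        {'type': 'ineq', 'fun': lambda x: -(x[0]**3 + 2.0 * x[1])}]  # g2

res = minimize(f, x0=np.array([-1.0, 0.0]), method='SLSQP', constraints=cons)
print(np.round(res.x, 4))   # close to [0, 0]
```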


SOLUTION The feasible region and the contours of the objective function are shown in Fig. 2.9. It can be seen that the optimum solution is (0, 0). Since $g_1$ and $g_2$ are both active at the optimum point (0, 0), their gradients can be computed as


$$\nabla g_1(\mathbf{X}^*) = \begin{Bmatrix} 3x_1^2 \\ -2 \end{Bmatrix}_{(0,\,0)} = \begin{Bmatrix} 0 \\ -2 \end{Bmatrix}
\qquad \text{and} \qquad
\nabla g_2(\mathbf{X}^*) = \begin{Bmatrix} 3x_1^2 \\ 2 \end{Bmatrix}_{(0,\,0)} = \begin{Bmatrix} 0 \\ 2 \end{Bmatrix}$$

Since $\nabla g_2(\mathbf{X}^*) = -\nabla g_1(\mathbf{X}^*)$, these gradients are linearly dependent at (0, 0), and hence the constraint qualification is not satisfied at the optimum point.
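
These hand computations can be verified numerically. The sketch below (assuming NumPy; not part of the text) tests the active-constraint gradients for linear independence and shows why the stationarity condition cannot be satisfied at (0, 0):

```python
import numpy as np

x_star = np.array([0.0, 0.0])
grad_f  = np.array([2.0 * (x_star[0] - 1.0), 2.0 * x_star[1]])  # [-2, 0]
grad_g1 = np.array([3.0 * x_star[0]**2, -2.0])                  # [ 0, -2]
grad_g2 = np.array([3.0 * x_star[0]**2,  2.0])                  # [ 0,  2]

# Rank 1 < 2: the active-constraint gradients are linearly dependent,
# so the constraint qualification is violated at (0, 0).
print(np.linalg.matrix_rank(np.column_stack([grad_g1, grad_g2])))  # 1

# Stationarity grad_f + l1*grad_g1 + l2*grad_g2 = 0 fails in its first
# component: -2 + 0*l1 + 0*l2 = 0 is a contradiction, so no multipliers
# exist and the Kuhn-Tucker conditions do not hold, even though (0, 0)
# is the optimum.
```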