Engineering Optimization: Theory and Practice, Fourth Edition


442 Nonlinear Programming III: Constrained Optimization Techniques


Canceling the common terms from both sides, we can write the inequality (7.193) as

$$f(\mathbf{X}^*_{k+1})\left(\frac{1}{r_{k+1}}-\frac{1}{r_k}\right) < f(\mathbf{X}^*_k)\left(\frac{1}{r_{k+1}}-\frac{1}{r_k}\right) \tag{7.194}$$

Since

$$\frac{1}{r_{k+1}}-\frac{1}{r_k}=\frac{r_k-r_{k+1}}{r_k r_{k+1}} > 0 \tag{7.195}$$

we obtain

$$f(\mathbf{X}^*_{k+1}) < f(\mathbf{X}^*_k) \tag{7.196}$$
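The monotonic decrease asserted by Eq. (7.196) can be observed numerically. The sketch below uses a toy one-variable problem of my own choosing (not from the text): minimize f(x) = x subject to g(x) = 1 − x ≤ 0, for which the interior penalty function is φ(x, r) = x + r/(x − 1) with analytic minimizer x*(r) = 1 + √r.

```python
# Illustration of Eqs. (7.194)-(7.196): for the interior penalty method,
# the successive unconstrained minima satisfy f(X*_{k+1}) < f(X*_k)
# as r_k decreases.  Toy problem (my own choice, not from the text):
#   minimize f(x) = x   subject to   g(x) = 1 - x <= 0   (feasible: x >= 1)
# phi(x, r) = f(x) - r * (1/g(x)) = x + r/(x - 1)  for x > 1.

def phi(x, r):
    return x + r / (x - 1.0)

def minimize_phi(r, lo=1.0 + 1e-12, hi=10.0, iters=200):
    """Ternary search for the minimizer of the unimodal phi(., r)."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if phi(m1, r) < phi(m2, r):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

f_values = []
r = 1.0
for k in range(6):
    x_star = minimize_phi(r)
    f_values.append(x_star)   # f(x) = x for this toy problem
    r *= 0.1                  # decreasing sequence r_{k+1} < r_k

# The sequence f(X*_k) is strictly decreasing, as Eq. (7.196) asserts,
# and approaches the constrained minimum f = 1.
assert all(a > b for a, b in zip(f_values, f_values[1:]))
```

Here the analytic minimizer 1 + √r confirms the numerical result: each tenfold reduction of r pulls the minimum closer to the constraint boundary while f(X*_k) decreases strictly.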

7.14 Convex Programming Problem


In Section 7.13 we saw that the sequential minimization of

$$\phi(\mathbf{X}, r_k) = f(\mathbf{X}) - r_k \sum_{j=1}^{m} \frac{1}{g_j(\mathbf{X})}, \qquad r_k > 0 \tag{7.197}$$

for a decreasing sequence of values of $r_k$ gives the minima $\mathbf{X}^*_k$. As $k \to \infty$, these points $\mathbf{X}^*_k$ converge to the minimum of the constrained problem:

$$\begin{aligned}
&\text{Minimize } f(\mathbf{X}) \\
&\text{subject to} \\
&g_j(\mathbf{X}) \leq 0, \quad j = 1, 2, \ldots, m
\end{aligned} \tag{7.198}$$
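The convergence of the interior minima X*_k to the solution of problem (7.198) as k → ∞ can be sketched on a second toy instance of my own choosing (not from the text): minimize f(x) = (x + 1)² subject to g(x) = −x ≤ 0, whose constrained minimum is x = 0 with f = 1.

```python
# Sketch of the convergence claim below Eq. (7.197): as r_k -> 0, the
# interior minima X*_k approach the solution of the constrained problem.
# Toy instance (my own choice, not from the text):
#   minimize f(x) = (x + 1)^2   subject to   g(x) = -x <= 0
# phi(x, r) = (x + 1)^2 - r * (1/(-x)) = (x + 1)^2 + r/x   for x > 0.

def phi(x, r):
    return (x + 1.0) ** 2 + r / x

def minimize_phi(r, lo=1e-12, hi=10.0, iters=200):
    # phi(., r) is strictly convex on (0, inf), so ternary search applies
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if phi(m1, r) < phi(m2, r):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

minima = [minimize_phi(r) for r in (1.0, 0.1, 0.01, 0.001)]

# Every iterate stays strictly feasible (x > 0) -- the defining property of
# an interior method -- and the sequence tends to the boundary point x = 0.
assert all(x > 0 for x in minima)
assert all(a > b for a, b in zip(minima, minima[1:]))
```

Note that the iterates approach the constrained optimum from inside the feasible region; they reach the active constraint only in the limit r_k → 0.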

To ensure the existence of a global minimum of φ(X, r_k) for every positive value of r_k, φ has to be a strictly convex function of X. The following theorem gives sufficient conditions for the φ function to be strictly convex. If φ is strictly convex, for every r_k > 0 there exists a unique minimum of φ(X, r_k).

Theorem 7.2 If f(X) and g_j(X) are convex and at least one of f(X) and g_j(X) is strictly convex, the function φ(X, r_k) defined by Eq. (7.197) will be a strictly convex function of X.

Proof: This theorem can be proved in two steps. In the first step we prove that if a function g_j(X) is convex, 1/g_j(X) will be concave in the region where g_j(X) < 0. In the second step, we prove that a positive combination of convex functions is convex, and strictly convex if at least one of the functions is strictly convex.
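Theorem 7.2 can also be checked numerically on a concrete instance of my own choosing (not from the text): with f(x) = x² strictly convex and g(x) = x − 2 convex, the function φ(x, r) = x² + r/(2 − x) should be strictly convex on the interior g(x) < 0, i.e. for x < 2. The sketch below tests the strict midpoint inequality at random interior points.

```python
# Numerical sanity check of Theorem 7.2 (an illustration, not the proof):
#   f(x) = x^2         (strictly convex)
#   g(x) = x - 2       (convex; interior of feasible region is x < 2)
#   phi(x, r) = f(x) - r/g(x) = x^2 + r/(2 - x)
# Strict convexity implies phi((x+y)/2) < (phi(x) + phi(y))/2 for x != y.

import random

def phi(x, r=0.5):
    return x ** 2 + r / (2.0 - x)

random.seed(0)
for _ in range(1000):
    x = random.uniform(-5.0, 1.9)
    y = random.uniform(-5.0, 1.9)
    if abs(x - y) < 1e-9:
        continue  # strict inequality requires distinct points
    assert phi(0.5 * (x + y)) < 0.5 * (phi(x) + phi(y))
```

This matches the two proof steps: 1/g(x) = 1/(x − 2) is concave for x < 2, so −r/g(x) is convex there, and adding it to the strictly convex f gives a strictly convex φ.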

Thus Theorem A.3 of Appendix A guarantees that the sequential minimization of φ(X, r_k) for a decreasing sequence of values of r_k leads to the global minimum of the original constrained problem. When the convexity conditions are not satisfied, or when the functions are so complex that we do not know beforehand whether the convexity conditions are satisfied, it will not be possible to prove that the minimum found by the