Engineering Optimization: Theory and Practice, Fourth Edition


404 Nonlinear Programming III: Constrained Optimization Techniques


7.8 Rosen’s Gradient Projection Method


The gradient projection method of Rosen [7.9, 7.10] does not require the solution of an
auxiliary linear optimization problem to find the usable feasible direction. It uses the
projection of the negative of the objective function gradient onto the constraints that
are currently active. Although the method has been described by Rosen for a general
nonlinear programming problem, its effectiveness is confined primarily to problems in
which the constraints are all linear. Consider a problem with linear constraints:

Minimize f(X)

subject to

\[
g_j(X) = \sum_{i=1}^{n} a_{ij} x_i - b_j \le 0, \qquad j = 1, 2, \ldots, m \tag{7.50}
\]
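For linear constraints, the active set at a point X follows directly from Eq. (7.50): constraint j is active when g_j(X) = 0 (to a tolerance), and the gradients of the active constraints are the corresponding columns of the coefficient matrix. A minimal NumPy sketch; the data A, b, and X below are illustrative values, not taken from the text:

```python
import numpy as np

def active_set(A, b, X, tol=1e-8):
    """Return indices j of the constraints active at X.

    Constraints follow Eq. (7.50): g_j(X) = sum_i a_ij * x_i - b_j <= 0,
    with the coefficients stored so that column j of A is the gradient of g_j.
    """
    g = A.T @ X - b                 # g_j(X) for all j
    return np.where(np.abs(g) <= tol)[0]

# Illustrative data: n = 2 variables, m = 3 linear constraints
A = np.array([[1.0, 1.0, -1.0],
              [1.0, 0.0,  0.0]])   # column j = gradient of g_j
b = np.array([2.0, 1.0, 0.0])
X = np.array([1.0, 1.0])

act = active_set(A, b, X)          # g_1 and g_2 vanish at X, g_3 is slack
N = A[:, act]                      # matrix of active-constraint gradients
```

Here `N` is exactly the n x p matrix of active-constraint gradients that the text assembles below in Eq. (7.52).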

Let the indices of the active constraints at any point be $j_1, j_2, \ldots, j_p$. The gradients of
the active constraints are given by

\[
\nabla g_j(X) = \begin{Bmatrix} a_{1j} \\ a_{2j} \\ \vdots \\ a_{nj} \end{Bmatrix},
\qquad j = j_1, j_2, \ldots, j_p \tag{7.51}
\]

By defining a matrix N of order $n \times p$ as

\[
N = [\nabla g_{j_1} \;\; \nabla g_{j_2} \;\; \cdots \;\; \nabla g_{j_p}] \tag{7.52}
\]

the direction-finding problem for obtaining a usable feasible direction S can be posed
as follows.

\[
\text{Find } S \text{ which minimizes } S^{T} \nabla f(X) \tag{7.53}
\]

subject to
\[
N^{T} S = 0 \tag{7.54}
\]

\[
S^{T} S - 1 = 0 \tag{7.55}
\]

where Eq. (7.55) denotes the normalization of the vector S. To solve this
equality-constrained problem, we construct the Lagrangian function as

\[
L(S, \lambda, \beta) = S^{T} \nabla f(X) + \lambda^{T} N^{T} S + \beta (S^{T} S - 1) \tag{7.56}
\]

where

\[
\lambda = \begin{Bmatrix} \lambda_1 \\ \lambda_2 \\ \vdots \\ \lambda_p \end{Bmatrix}
\]
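Setting the partial derivatives of L in Eq. (7.56) with respect to S to zero and eliminating the multipliers leads to the well-known closed form of Rosen's method: $S = -P\,\nabla f(X) / \|P\,\nabla f(X)\|$, where $P = I - N(N^{T}N)^{-1}N^{T}$ projects onto the null space of $N^{T}$. A NumPy sketch under that standard result, with an illustrative single active constraint (not taken from the text):

```python
import numpy as np

def projected_direction(N, grad_f, tol=1e-12):
    """Usable feasible direction solving Eqs. (7.53)-(7.55):
    project -grad_f onto the null space of N^T and normalize."""
    n = N.shape[0]
    # Projection matrix P = I - N (N^T N)^{-1} N^T
    P = np.eye(n) - N @ np.linalg.solve(N.T @ N, N.T)
    s = -P @ grad_f
    norm = np.linalg.norm(s)
    if norm <= tol:
        return None   # projected gradient vanishes; optimality must be checked
    return s / norm   # satisfies S^T S = 1, i.e. Eq. (7.55)

# Illustrative case: one active constraint x1 + x2 - 2 <= 0, gradient (1, 1)
N_demo = np.array([[1.0], [1.0]])
grad_demo = np.array([-1.0, -2.0])
S_demo = projected_direction(N_demo, grad_demo)
```

The returned direction satisfies both constraints of the subproblem: $N^{T}S = 0$ keeps S in the active constraint surface (feasible, since the constraints are linear), and the normalization makes $S^{T}\nabla f(X)$ the steepest feasible descent rate.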