7.9 Generalized Reduced Gradient Method
$$
d\mathbf{Y} = \begin{Bmatrix} dy_1 \\ dy_2 \\ \vdots \\ dy_{n-l} \end{Bmatrix} \tag{7.100}
$$

$$
d\mathbf{Z} = \begin{Bmatrix} dz_1 \\ dz_2 \\ \vdots \\ dz_{m+l} \end{Bmatrix} \tag{7.101}
$$
Assuming that the constraints are originally satisfied at the vector $\mathbf{X}$, $(g(\mathbf{X}) = 0)$, any change in the vector $d\mathbf{X}$ must correspond to $dg = 0$ to maintain feasibility at $\mathbf{X} + d\mathbf{X}$. Equation (7.95) can be solved to express $d\mathbf{Z}$ as

$$
d\mathbf{Z} = -[D]^{-1}[C]\, d\mathbf{Y} \tag{7.102}
$$
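As a small numerical sketch (not from the text; NumPy and the specific values of $[C]$, $[D]$, and $d\mathbf{Y}$ below are illustrative assumptions), Eq. (7.102) can be evaluated by solving the linear system $[D]\,d\mathbf{Z} = -[C]\,d\mathbf{Y}$ rather than forming $[D]^{-1}$ explicitly:

```python
import numpy as np

# Hypothetical Jacobian blocks of the constraints g(X) = 0:
# C = dg/dY (w.r.t. independent variables), D = dg/dZ (w.r.t. dependent ones).
C = np.array([[1.0, 2.0],
              [0.5, 1.0]])      # shape (m + l, n - l)
D = np.array([[2.0, 0.0],
              [1.0, 3.0]])      # shape (m + l, m + l), assumed nonsingular

dY = np.array([0.1, -0.2])      # a trial change in the independent variables

# Eq. (7.102): dZ = -[D]^{-1} [C] dY, via a linear solve instead of an inverse.
dZ = np.linalg.solve(D, -C @ dY)

# The linearized constraint change dg = [C] dY + [D] dZ should vanish.
print(dZ, C @ dY + D @ dZ)      # second vector is (numerically) zero
```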
The change in the objective function due to the change in $\mathbf{X}$ is given by Eq. (7.94), which can be expressed, using Eq. (7.102), as

$$
df(\mathbf{X}) = \left(\nabla_Y^T f - \nabla_Z^T f\,[D]^{-1}[C]\right) d\mathbf{Y} \tag{7.103}
$$
or

$$
\frac{df(\mathbf{X})}{d\mathbf{Y}} = \mathbf{G}_R \tag{7.104}
$$

where

$$
\mathbf{G}_R = \nabla_Y f - \left([D]^{-1}[C]\right)^T \nabla_Z f \tag{7.105}
$$
is called the generalized reduced gradient. Geometrically, the reduced gradient can be described as a projection of the original $n$-dimensional gradient onto the $(n - m)$-dimensional feasible region described by the design variables.
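The reduced gradient of Eq. (7.105) can likewise be computed without an explicit inverse. The sketch below (again with made-up gradients and Jacobian blocks) solves $[D]^T \mathbf{w} = \nabla_Z f$ once and forms $\mathbf{G}_R = \nabla_Y f - [C]^T \mathbf{w}$, which is algebraically identical to Eq. (7.105):

```python
import numpy as np

# Hypothetical data at the current point X (illustrative values only).
grad_Y = np.array([1.0, -2.0])   # gradient of f w.r.t. Y, length n - l
grad_Z = np.array([0.5, 1.5])    # gradient of f w.r.t. Z, length m + l
C = np.array([[1.0, 2.0],
              [0.5, 1.0]])       # dg/dY
D = np.array([[2.0, 0.0],
              [1.0, 3.0]])       # dg/dZ, assumed nonsingular

# Eq. (7.105): G_R = grad_Y - ([D]^{-1}[C])^T grad_Z = grad_Y - C^T w,
# where w solves D^T w = grad_Z.
w = np.linalg.solve(D.T, grad_Z)
G_R = grad_Y - C.T @ w

print(G_R)   # at a constrained minimum this vector would be (near) zero
```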
We know that a necessary condition for the existence of a minimum of an unconstrained function is that the components of the gradient vanish. Similarly, a constrained function assumes its minimum value when the appropriate components of the reduced gradient are zero. This condition can be verified to be the same as the Kuhn–Tucker conditions to be satisfied at a relative minimum.
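As a brief check (a standard argument; the multiplier vector $\boldsymbol{\lambda}$ is introduced here for illustration and should not be confused with the scalar step length $\lambda$ used below), the Kuhn–Tucker stationarity conditions for minimizing $f$ subject to $g(\mathbf{X}) = 0$ can be written in partitioned form as

$$
\nabla_Y f + [C]^T \boldsymbol{\lambda} = \mathbf{0}, \qquad
\nabla_Z f + [D]^T \boldsymbol{\lambda} = \mathbf{0}
$$

The second set gives $\boldsymbol{\lambda} = -[D]^{-T}\,\nabla_Z f$; substituting into the first yields $\nabla_Y f - ([D]^{-1}[C])^T \nabla_Z f = \mathbf{G}_R = \mathbf{0}$, which is precisely the condition that the reduced gradient of Eq. (7.105) vanish.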
In fact, the reduced gradient $\mathbf{G}_R$ can be used to generate a search direction $\mathbf{S}$ to reduce the value of the constrained objective function, just as the gradient $\nabla f$ can be used to generate a search direction $\mathbf{S}$ for an unconstrained function. A suitable step length $\lambda$ is to be chosen to minimize the value of $f$ along the search direction $\mathbf{S}$. For any specific value of $\lambda$, the dependent variable vector $\mathbf{Z}$ is updated using Eq. (7.102). Noting that Eq. (7.102) is based on using a linear approximation to the original nonlinear problem, we find that the constraints may not be exactly equal to zero at $\lambda$, that is, $dg \neq 0$. Hence, when $\mathbf{Y}$ is held