By multiplying Eq. (8.4) by $x_k$, we can rewrite it as

$$
x_k \frac{\partial f}{\partial x_k}
= \sum_{j=1}^{N} a_{kj}\left( c_j x_1^{a_{1j}} x_2^{a_{2j}} \cdots x_{k-1}^{a_{k-1,j}} x_k^{a_{kj}} x_{k+1}^{a_{k+1,j}} \cdots x_n^{a_{nj}} \right)
= \sum_{j=1}^{N} a_{kj} U_j(\mathbf{X}) = 0, \qquad k = 1, 2, \ldots, n \tag{8.5}
$$
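As a quick sanity check (not part of the text), the identity behind Eq. (8.5) can be verified symbolically for a small posynomial; the two-variable function, its coefficients, and its exponent matrix below are illustrative choices, not data from the book.

```python
# Symbolic check of Eq. (8.5): x_k * df/dx_k equals the sum of
# a_kj * U_j(X) over all posynomial terms U_j.
import sympy as sp

x1, x2 = sp.symbols("x1 x2", positive=True)

# Illustrative posynomial with N = 3 terms (coefficients and exponents
# are arbitrary choices, not values from the text):
# U1 = 5*x1**2*x2, U2 = 3/(x1*x2**2), U3 = 2*x1
terms = [5 * x1**2 * x2, 3 * x1**-1 * x2**-2, 2 * x1]
f = sum(terms)

# Exponent matrix a_kj: row k = variable, column j = term
a = [[2, -1, 1],   # exponents of x1 in U1, U2, U3
     [1, -2, 0]]   # exponents of x2 in U1, U2, U3

for k, xk in enumerate([x1, x2]):
    lhs = sp.expand(xk * sp.diff(f, xk))
    rhs = sp.expand(sum(a[k][j] * terms[j] for j in range(3)))
    assert sp.simplify(lhs - rhs) == 0  # identity holds for each k
```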

To find the minimizing vector

$$
\mathbf{X}^* =
\begin{Bmatrix}
x_1^* \\
x_2^* \\
\vdots \\
x_n^*
\end{Bmatrix}
$$
we have to solve the $n$ equations given by Eqs. (8.4) simultaneously. To ensure that the point $\mathbf{X}^*$ corresponds to the minimum of $f$ (and not to a maximum or a stationary point of $f$), the sufficiency condition must be satisfied. This condition requires that the Hessian matrix of $f$, evaluated at $\mathbf{X}^*$,

$$
\mathbf{J}_{\mathbf{X}^*} = \left[ \frac{\partial^2 f}{\partial x_k \, \partial x_l} \right]_{\mathbf{X}^*}
$$

be positive definite. We will consider this condition at a later stage.
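Numerically, positive definiteness is usually checked through the eigenvalues of the Hessian. The sketch below is a minimal illustration with a made-up $2 \times 2$ Hessian standing in for $[\partial^2 f / \partial x_k \, \partial x_l]$ evaluated at $\mathbf{X}^*$.

```python
# Numerical test of the sufficiency condition: the Hessian of f at X*
# must be positive definite (all eigenvalues strictly positive).
import numpy as np

def is_positive_definite(hessian: np.ndarray, tol: float = 1e-12) -> bool:
    """Return True if the symmetric matrix is positive definite."""
    eigenvalues = np.linalg.eigvalsh(hessian)  # eigvalsh: for symmetric matrices
    return bool(np.all(eigenvalues > tol))

# Hypothetical Hessian [d^2 f / dx_k dx_l] at X* (made-up numbers):
J_star = np.array([[4.0, 1.0],
                   [1.0, 3.0]])
print(is_positive_definite(J_star))  # True: X* is a minimum of f
```

An equivalent test is to attempt a Cholesky factorization (`np.linalg.cholesky`), which succeeds exactly when the symmetric matrix is positive definite.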
Since the vector $\mathbf{X}^*$ satisfies Eqs. (8.5), we have

$$
\sum_{j=1}^{N} a_{kj} U_j(\mathbf{X}^*) = 0, \qquad k = 1, 2, \ldots, n \tag{8.6}
$$

After dividing by the minimum value of the objective function $f^*$, Eq. (8.6) becomes

$$
\sum_{j=1}^{N} \Delta_j^* a_{kj} = 0, \qquad k = 1, 2, \ldots, n \tag{8.7}
$$

where the quantities $\Delta_j^*$ are defined as

$$
\Delta_j^* = \frac{U_j(\mathbf{X}^*)}{f^*} = \frac{U_j^*}{f^*} \tag{8.8}
$$

and denote the relative contribution of the $j$th term to the optimal objective function. From Eq. (8.8), we obtain

$$
\sum_{j=1}^{N} \Delta_j^* = \Delta_1^* + \Delta_2^* + \cdots + \Delta_N^*
= \frac{1}{f^*} \left( U_1^* + U_2^* + \cdots + U_N^* \right) = 1 \tag{8.9}
$$
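As a small illustration (not an example from the text), take the single-variable posynomial $f(x) = 4/x + x$, so that $N = 2$, $n = 1$, and the exponent matrix is $[a_{11} \; a_{12}] = [-1 \; 1]$. Setting $df/dx = 0$ gives $x^* = 2$ and $f^* = 4$, hence $U_1^* = U_2^* = 2$ and $\Delta_1^* = \Delta_2^* = \tfrac{1}{2}$. The snippet below verifies the orthogonality condition (8.7) and the normality condition (8.9) for this case.

```python
# Verify Eqs. (8.7)-(8.9) for the illustrative posynomial f(x) = 4/x + x.
import numpy as np

x_star = 2.0                                # from df/dx = -4/x**2 + 1 = 0
U_star = np.array([4.0 / x_star, x_star])   # optimal term values U_j*
f_star = U_star.sum()                       # f* = 4
delta_star = U_star / f_star                # Eq. (8.8): Delta_j* = U_j*/f*

a = np.array([[-1.0, 1.0]])                 # exponent matrix a_kj (n=1, N=2)

print(delta_star)                           # [0.5 0.5]
print(a @ delta_star)                       # Eq. (8.7) orthogonality: [0.]
print(delta_star.sum())                     # Eq. (8.9) normality: 1.0
```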