adjacent lower value, for simplifying the computations. Using $\mathbf{X}^0 = \begin{Bmatrix} 1.2 \\ 1.1 \end{Bmatrix}$, we have

$$f(\mathbf{X}^0) = 6.51, \qquad g(\mathbf{X}^0) = -2.26$$

$$\nabla f(\mathbf{X}^0) = \begin{Bmatrix} 4x_1 \\ 6x_2 \end{Bmatrix}_{\mathbf{X}^0} = \begin{Bmatrix} 4.8 \\ 6.6 \end{Bmatrix}, \qquad
\nabla g(\mathbf{X}^0) = \begin{Bmatrix} -\dfrac{1}{x_1^2} \\[6pt] -\dfrac{1}{x_2^2} \end{Bmatrix}_{\mathbf{X}^0} = \begin{Bmatrix} -0.69 \\ -0.83 \end{Bmatrix}$$
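These values can be reproduced directly. The gradients quoted above imply $f(\mathbf{X}) = 2x_1^2 + 3x_2^2$ and $g(\mathbf{X}) = 1/x_1 + 1/x_2 - 4$, which is consistent with the function values at $\mathbf{X}^0$; assuming those forms (they are stated earlier in the example, not on this page), a minimal numerical check is:

```python
import numpy as np

# Assumed forms, inferred from the gradients and function values quoted above:
#   f(X) = 2*x1**2 + 3*x2**2,   g(X) = 1/x1 + 1/x2 - 4 <= 0
def f(x):
    return 2 * x[0] ** 2 + 3 * x[1] ** 2

def g(x):
    return 1 / x[0] + 1 / x[1] - 4

def grad_f(x):
    return np.array([4 * x[0], 6 * x[1]])

def grad_g(x):
    return np.array([-1 / x[0] ** 2, -1 / x[1] ** 2])

x0 = np.array([1.2, 1.1])
print(f(x0), g(x0))   # ~6.51, ~-2.26
print(grad_f(x0))     # [4.8, 6.6]
print(grad_g(x0))     # [~-0.69, ~-0.83]
```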

Now

$$x_1 = y_{11}(0.8) + y_{12}(1.2) + y_{13}(1.5)$$
$$x_2 = y_{21}(0.8) + y_{22}(1.1) + y_{23}(1.4)$$
$$\delta x_1 = y_{11}(0.8 - 1.2) + y_{12}(1.2 - 1.2) + y_{13}(1.5 - 1.2)$$
$$\delta x_2 = y_{21}(0.8 - 1.1) + y_{22}(1.1 - 1.1) + y_{23}(1.4 - 1.1)$$

$$f \approx 6.51 + \begin{Bmatrix} 4.8 & 6.6 \end{Bmatrix} \begin{Bmatrix} -0.4y_{11} + 0.3y_{13} \\ -0.3y_{21} + 0.3y_{23} \end{Bmatrix}$$

$$g \approx -2.26 + \begin{Bmatrix} -0.69 & -0.83 \end{Bmatrix} \begin{Bmatrix} -0.4y_{11} + 0.3y_{13} \\ -0.3y_{21} + 0.3y_{23} \end{Bmatrix}$$

Thus the first approximate problem becomes (in terms of the unknowns $y_{11}$, $y_{12}$, $y_{13}$, $y_{21}$, $y_{22}$, and $y_{23}$):

$$\text{Minimize } f = 6.51 - 1.92y_{11} + 1.44y_{13} - 1.98y_{21} + 1.98y_{23}$$

subject to

$$-2.26 + 0.28y_{11} - 0.21y_{13} + 0.25y_{21} - 0.25y_{23} \le 0$$
$$y_{11} + y_{12} + y_{13} = 1$$
$$y_{21} + y_{22} + y_{23} = 1$$
$$y_{ij} = 0 \text{ or } 1, \qquad i = 1, 2; \; j = 1, 2, 3$$
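The coefficients in this approximate problem are simply the gradient components multiplied by the $\delta x$ contributions of each $y_{ij}$. A short sketch of that bookkeeping, reusing the values above:

```python
import numpy as np

# Discrete candidate values for x1 and x2, and the current point X0 = (1.2, 1.1)
d1 = np.array([0.8, 1.2, 1.5])      # candidates for x1
d2 = np.array([0.8, 1.1, 1.4])      # candidates for x2
x0 = np.array([1.2, 1.1])

# delta-x coefficients: delta x_i = sum_j y_ij * (d_ij - x0_i)
dx1_coeff = d1 - x0[0]              # ~[-0.4, 0.0, 0.3]
dx2_coeff = d2 - x0[1]              # ~[-0.3, 0.0, 0.3]

grad_f0 = np.array([4.8, 6.6])      # grad f at X0
grad_g0 = np.array([-0.69, -0.83])  # grad g at X0

# Linearized objective and constraint coefficients of y11..y13 and y21..y23
print(grad_f0[0] * dx1_coeff)   # ~[-1.92, 0.0,  1.44]
print(grad_f0[1] * dx2_coeff)   # ~[-1.98, 0.0,  1.98]
print(grad_g0[0] * dx1_coeff)   # ~[ 0.28, 0.0, -0.21]
print(grad_g0[1] * dx2_coeff)   # ~[ 0.25, 0.0, -0.25]
```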

In this problem there are only $3 \times 3 = 9$ possible solutions, so all of them can be enumerated and the optimum solution found as

$$y_{11} = 1, \quad y_{12} = 0, \quad y_{13} = 0, \quad y_{21} = 1, \quad y_{22} = 0, \quad y_{23} = 0$$

Thus the solution of the first approximate problem, in terms of the original variables, is given by

$$x_1 = 0.8, \quad x_2 = 0.8, \quad f(\mathbf{X}) = 2.61, \quad g(\mathbf{X}) = -1.5$$
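A brute-force sketch of the enumeration mentioned above: each $x_i$ selects exactly one of its three discrete values, so the nine combinations can be checked against the linearized constraint and objective. The coefficients below are the ones derived in the approximate problem:

```python
from itertools import product

# Contribution of selecting discrete value j for x1 (y1j = 1) and k for x2 (y2k = 1)
f_coeff_x1 = [-1.92, 0.0, 1.44]
f_coeff_x2 = [-1.98, 0.0, 1.98]
g_coeff_x1 = [0.28, 0.0, -0.21]
g_coeff_x2 = [0.25, 0.0, -0.25]
d1 = [0.8, 1.2, 1.5]
d2 = [0.8, 1.1, 1.4]

best = None
for j, k in product(range(3), range(3)):        # all nine combinations
    f_lin = 6.51 + f_coeff_x1[j] + f_coeff_x2[k]
    g_lin = -2.26 + g_coeff_x1[j] + g_coeff_x2[k]
    if g_lin <= 0 and (best is None or f_lin < best[0]):
        best = (f_lin, d1[j], d2[k])

print(best)   # (~2.61, 0.8, 0.8) -> x1 = 0.8, x2 = 0.8, f ~ 2.61
```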

This point can be used to generate a second approximate problem and the process can
be repeated until the final optimum solution is found.