Engineering Optimization: Theory and Practice, Fourth Edition




where

\[
f(1, 0, -2) = e^{-2}
\]

\[
df(1, 0, -2) = h_1 \left.\frac{\partial f}{\partial x_1}\right|_{(1,0,-2)} + h_2 \left.\frac{\partial f}{\partial x_2}\right|_{(1,0,-2)} + h_3 \left.\frac{\partial f}{\partial x_3}\right|_{(1,0,-2)}
\]

\[
= \left[\, h_1 e^{x_3} + h_2 (2 x_2 x_3) + h_3 x_2^2 + h_3 x_1 e^{x_3} \,\right]_{(1,0,-2)} = h_1 e^{-2} + h_3 e^{-2}
\]

\[
d^2 f(1, 0, -2) = \sum_{i=1}^{3} \sum_{j=1}^{3} h_i h_j \left.\frac{\partial^2 f}{\partial x_i \,\partial x_j}\right|_{(1,0,-2)}
\]

\[
= \left( h_1^2 \frac{\partial^2 f}{\partial x_1^2} + h_2^2 \frac{\partial^2 f}{\partial x_2^2} + h_3^2 \frac{\partial^2 f}{\partial x_3^2} + 2 h_1 h_2 \frac{\partial^2 f}{\partial x_1 \,\partial x_2} + 2 h_2 h_3 \frac{\partial^2 f}{\partial x_2 \,\partial x_3} + 2 h_1 h_3 \frac{\partial^2 f}{\partial x_1 \,\partial x_3} \right)_{(1,0,-2)}
\]

\[
= \left[\, h_1^2 (0) + h_2^2 (2 x_3) + h_3^2 (x_1 e^{x_3}) + 2 h_1 h_2 (0) + 2 h_2 h_3 (2 x_2) + 2 h_1 h_3 (e^{x_3}) \,\right]_{(1,0,-2)} = -4 h_2^2 + e^{-2} h_3^2 + 2 h_1 h_3 e^{-2}
\]

Thus the Taylor's series approximation is given by

\[
f(\mathbf{X}) \simeq e^{-2} + e^{-2} (h_1 + h_3) + \frac{1}{2!} \left( -4 h_2^2 + e^{-2} h_3^2 + 2 h_1 h_3 e^{-2} \right)
\]

where $h_1 = x_1 - 1$, $h_2 = x_2$, and $h_3 = x_3 + 2$.
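As a quick numerical check, the quadratic approximation above can be compared against the function itself near the expansion point. This is a sketch, not from the text: the function `f(X) = x1*e^{x3} + x2^2*x3` is inferred here from the partial derivatives used in the derivation, and the test point is an arbitrary small step away from X∗ = (1, 0, −2)ᵀ.

```python
import math

def f(x1, x2, x3):
    # Function inferred from the partial derivatives in the example:
    # df/dx1 = e^{x3}, df/dx2 = 2*x2*x3, df/dx3 = x2^2 + x1*e^{x3}
    return x1 * math.exp(x3) + x2**2 * x3

def taylor2(x1, x2, x3):
    # Second-order Taylor expansion about X* = (1, 0, -2) derived above
    h1, h2, h3 = x1 - 1.0, x2, x3 + 2.0
    e2 = math.exp(-2)
    return e2 + e2 * (h1 + h3) + 0.5 * (-4 * h2**2 + e2 * h3**2 + 2 * h1 * h3 * e2)

# A point a small step away from X*: the two values should agree to
# roughly the size of the neglected third-order terms.
x = (1.01, 0.02, -1.98)
print(f(*x), taylor2(*x))
```

At the expansion point itself both expressions reduce to e^{−2}, and the gap between them grows cubically with the step size.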


Theorem 2.3 Necessary Condition If f(X) has an extreme point (maximum or minimum) at X = X∗ and if the first partial derivatives of f(X) exist at X∗, then

\[
\frac{\partial f}{\partial x_1}(\mathbf{X}^*) = \frac{\partial f}{\partial x_2}(\mathbf{X}^*) = \cdots = \frac{\partial f}{\partial x_n}(\mathbf{X}^*) = 0 \tag{2.9}
\]

Proof: The proof given for Theorem 2.1 can easily be extended to prove the present
theorem. However, we present a different approach to prove this theorem. Suppose that
one of the first partial derivatives, say the kth one, does not vanish at X∗. Then, by
Taylor's theorem,


\[
f(\mathbf{X}^* + \mathbf{h}) = f(\mathbf{X}^*) + \sum_{i=1}^{n} h_i \frac{\partial f}{\partial x_i}(\mathbf{X}^*) + R_1(\mathbf{X}^*, \mathbf{h})
\]
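The necessary condition of Theorem 2.3 can be illustrated numerically. The sketch below (an assumption, not from the text) estimates the gradient of the example function by central finite differences and shows that X = (1, 0, −2)ᵀ is not an extreme point, since the first and third partial derivatives equal e^{−2} ≠ 0 there.

```python
import math

def f(x):
    # Example function inferred earlier: f(X) = x1*e^{x3} + x2^2*x3
    x1, x2, x3 = x
    return x1 * math.exp(x3) + x2**2 * x3

def grad_fd(func, x, eps=1e-6):
    # Central finite-difference estimate of the gradient of func at x
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += eps
        xm[i] -= eps
        g.append((func(xp) - func(xm)) / (2 * eps))
    return g

# Theorem 2.3: at an extreme point every first partial derivative vanishes.
# Here df/dx1 = df/dx3 = e^{-2} != 0, so (1, 0, -2) cannot be an extreme point.
print(grad_fd(f, [1.0, 0.0, -2.0]))
```

A point satisfying Eq. (2.9) to within the finite-difference tolerance would be a candidate stationary point; whether it is a maximum, minimum, or saddle point requires the second-order conditions.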