11.4 Stochastic Nonlinear Programming

If the standard deviations of $y_i$, $\sigma_{y_i}$, are small, $f(\mathbf{Y})$ can be approximated by the first two terms of Eq. (11.85):

$$f(\mathbf{Y}) \simeq f(\overline{\mathbf{Y}}) - \sum_{i=1}^{N} \left( \left.\frac{\partial f}{\partial y_i}\right|_{\overline{\mathbf{Y}}} \right) \overline{y}_i + \sum_{i=1}^{N} \left( \left.\frac{\partial f}{\partial y_i}\right|_{\overline{\mathbf{Y}}} \right) y_i = \psi(\mathbf{Y}) \tag{11.86}$$

If all $y_i$ $(i = 1, 2, \ldots, N)$ follow normal distributions, $\psi(\mathbf{Y})$, which is a linear function of $\mathbf{Y}$, also follows a normal distribution. The mean and variance of $\psi$ are given by

$$\overline{\psi} = \psi(\overline{\mathbf{Y}}) \tag{11.87}$$

$$\operatorname{Var}(\psi) = \sigma_\psi^2 = \sum_{i=1}^{N} \left( \left.\frac{\partial f}{\partial y_i}\right|_{\overline{\mathbf{Y}}} \right)^2 \sigma_{y_i}^2 \tag{11.88}$$

since all $y_i$ are independent. For the purpose of optimization, a new objective function $F(\mathbf{Y})$ can be constructed as

$$F(\mathbf{Y}) = k_1 \overline{\psi} + k_2 \sigma_\psi \tag{11.89}$$

where $k_1 \geq 0$ and $k_2 \geq 0$, and their numerical values indicate the relative importance of $\overline{\psi}$ and $\sigma_\psi$ for minimization. Another way of dealing with the standard deviation of $\psi$ is to minimize $\overline{\psi}$ subject to the constraint $\sigma_\psi \leq k_3 \overline{\psi}$, where $k_3$ is a constant, along with the other constraints.
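As a concrete illustration of Eqs. (11.86) to (11.89), the sketch below estimates $\overline{\psi}$ and $\sigma_\psi$ by linearizing $f$ about $\overline{\mathbf{Y}}$ and then forms $F = k_1\overline{\psi} + k_2\sigma_\psi$. This is a minimal Python sketch, not from the text: the function names (`taylor_mean_std`, `robust_objective`) are illustrative, and the partial derivatives are approximated by central finite differences rather than the analytical derivatives the derivation assumes.

```python
import numpy as np

def taylor_mean_std(f, y_mean, y_std, h=1e-6):
    """First-order estimates of the mean and standard deviation of f(Y)
    for independent random variables Y, per Eqs. (11.86)-(11.88):
    psi_bar = f(Y_mean), sigma_psi^2 = sum_i (df/dy_i|Y_mean)^2 * sigma_yi^2.
    Partial derivatives are approximated by central finite differences."""
    y_mean = np.asarray(y_mean, dtype=float)
    grad = np.zeros_like(y_mean)
    for i in range(y_mean.size):
        e = np.zeros_like(y_mean)
        e[i] = h
        grad[i] = (f(y_mean + e) - f(y_mean - e)) / (2.0 * h)
    psi_bar = f(y_mean)
    sigma_psi = np.sqrt(np.sum((grad * np.asarray(y_std, dtype=float)) ** 2))
    return psi_bar, sigma_psi

def robust_objective(f, y_mean, y_std, k1=1.0, k2=1.0):
    """New objective of Eq. (11.89): F = k1 * psi_bar + k2 * sigma_psi."""
    psi_bar, sigma_psi = taylor_mean_std(f, y_mean, y_std)
    return k1 * psi_bar + k2 * sigma_psi

# Hypothetical example: f(y1, y2) = y1^2 + 2*y2, means (1, 2), std devs (0.1, 0.05)
f = lambda y: y[0] ** 2 + 2.0 * y[1]
print(robust_objective(f, [1.0, 2.0], [0.1, 0.05], k1=1.0, k2=2.0))
```

Raising $k_2$ relative to $k_1$ weights the minimization toward a small spread of $\psi$ rather than a small mean, which is the trade-off Eq. (11.89) is meant to expose.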

11.4.2 Constraints


If some parameters are random in nature, the constraints will also be probabilistic, and one would like the probability that a given constraint is satisfied to be greater than a certain value. This is precisely what is stated in Eq. (11.84). The constraint inequality (11.84) can be written as
$$\int_{0}^{\infty} f_{g_j}(g_j)\, dg_j \geq p_j \tag{11.90}$$

where $f_{g_j}(g_j)$ is the probability density function of the random variable $g_j$ (a function of several random variables is also a random variable), whose range is assumed to be $-\infty$ to $\infty$. The constraint function $g_j(\mathbf{Y})$ can be expanded around the vector of mean values of the random variables, $\overline{\mathbf{Y}}$, as

$$g_j(\mathbf{Y}) \simeq g_j(\overline{\mathbf{Y}}) + \sum_{i=1}^{N} \left( \left.\frac{\partial g_j}{\partial y_i}\right|_{\overline{\mathbf{Y}}} \right) (y_i - \overline{y}_i) \tag{11.91}$$

From this equation, the mean value, $\overline{g}_j$, and the standard deviation, $\sigma_{g_j}$, of $g_j$ can be obtained as

$$\overline{g}_j = g_j(\overline{\mathbf{Y}}) \tag{11.92}$$

$$\sigma_{g_j} = \left\{ \sum_{i=1}^{N} \left( \left.\frac{\partial g_j}{\partial y_i}\right|_{\overline{\mathbf{Y}}} \right)^2 \sigma_{y_i}^2 \right\}^{1/2} \tag{11.93}$$
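With the linearization of Eq. (11.91) and normally distributed $y_i$, $g_j$ is approximately normal with the mean and standard deviation of Eqs. (11.92) and (11.93), so the probability on the left side of Eq. (11.90) can be evaluated with the standard normal CDF as $\Phi(\overline{g}_j / \sigma_{g_j})$. The sketch below is illustrative rather than from the text (the helper names are hypothetical, and the gradients are again finite-differenced); it checks whether a constraint meets a required probability $p_j$ under these assumptions.

```python
import numpy as np
from scipy.stats import norm

def constraint_mean_std(g, y_mean, y_std, h=1e-6):
    """Mean and standard deviation of g_j(Y) from the linearization in
    Eq. (11.91): g_bar = g(Y_mean) per Eq. (11.92), and
    sigma_g = {sum_i (dg/dy_i|Y_mean)^2 * sigma_yi^2}^(1/2) per Eq. (11.93)."""
    y_mean = np.asarray(y_mean, dtype=float)
    grad = np.zeros_like(y_mean)
    for i in range(y_mean.size):
        e = np.zeros_like(y_mean)
        e[i] = h
        grad[i] = (g(y_mean + e) - g(y_mean - e)) / (2.0 * h)
    g_bar = g(y_mean)
    sigma_g = np.sqrt(np.sum((grad * np.asarray(y_std, dtype=float)) ** 2))
    return g_bar, sigma_g

def satisfies_chance_constraint(g, y_mean, y_std, p_j):
    """Check Eq. (11.90), P(g_j >= 0) >= p_j, treating g_j as normal with
    the moments above, so that P(g_j >= 0) = Phi(g_bar / sigma_g)."""
    g_bar, sigma_g = constraint_mean_std(g, y_mean, y_std)
    prob = norm.cdf(g_bar / sigma_g) if sigma_g > 0 else float(g_bar >= 0)
    return prob >= p_j

# Hypothetical example: g(y) = 3 - y1 - y2 must hold with probability >= 0.95
g = lambda y: 3.0 - y[0] - y[1]
print(satisfies_chance_constraint(g, [1.0, 1.0], [0.2, 0.2], 0.95))
```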