Engineering Optimization: Theory and Practice, Fourth Edition


Iteration 2 (i = 2)
The next search direction is determined as

$$
S_2 = -[B_2]\,\nabla f_2 = -\begin{bmatrix} \tfrac{1}{2} & -\tfrac{1}{2} \\[4pt] -\tfrac{1}{2} & \tfrac{5}{2} \end{bmatrix} \begin{Bmatrix} -1 \\ -1 \end{Bmatrix} = \begin{Bmatrix} 0 \\ 2 \end{Bmatrix}
$$

To find the minimizing step length $\lambda_2^*$ along $S_2$, we minimize

$$
f(X_2 + \lambda_2 S_2) = f\left( \begin{Bmatrix} -1 \\ 1 \end{Bmatrix} + \lambda_2 \begin{Bmatrix} 0 \\ 2 \end{Bmatrix} \right) = f(-1,\ 1 + 2\lambda_2) = 4\lambda_2^2 - 2\lambda_2 - 1
$$

with respect to $\lambda_2$. Since $df/d\lambda_2 = 0$ at $\lambda_2^* = \tfrac{1}{4}$, we obtain

$$
X_3 = X_2 + \lambda_2^* S_2 = \begin{Bmatrix} -1 \\ 1 \end{Bmatrix} + \frac{1}{4} \begin{Bmatrix} 0 \\ 2 \end{Bmatrix} = \begin{Bmatrix} -1 \\[2pt] \tfrac{3}{2} \end{Bmatrix}
$$

This point can be identified as the optimum point since

$$
\nabla f_3 = \begin{Bmatrix} 0 \\ 0 \end{Bmatrix} \qquad \text{and} \qquad \|\nabla f_3\| = 0 < \varepsilon
$$
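The step above can be verified numerically. A minimal Python sketch, assuming $[B_2]$, $\nabla f_2$, and $X_2$ take the values used in this iteration:

```python
# Quasi-Newton step of this iteration, assuming
# [B2] = [[1/2, -1/2], [-1/2, 5/2]] and grad f2 = {-1, -1} at X2 = {-1, 1}.
def mat_vec(B, v):
    """Multiply a 2x2 matrix by a 2-vector."""
    return [B[0][0] * v[0] + B[0][1] * v[1],
            B[1][0] * v[0] + B[1][1] * v[1]]

B2 = [[0.5, -0.5],
      [-0.5, 2.5]]
grad_f2 = [-1.0, -1.0]
X2 = [-1.0, 1.0]

# Search direction: S2 = -[B2] grad f2
S2 = [-s for s in mat_vec(B2, grad_f2)]            # -> [0.0, 2.0]

# Exact line search: phi(l) = 4 l^2 - 2 l - 1, so phi'(l) = 8 l - 2 = 0
lam_star = 2.0 / 8.0                               # -> 0.25

# New point: X3 = X2 + lam_star * S2
X3 = [X2[i] + lam_star * S2[i] for i in range(2)]  # -> [-1.0, 1.5]
print(S2, lam_star, X3)
```

This reproduces $S_2 = \{0, 2\}$, $\lambda_2^* = 1/4$, and $X_3 = \{-1, 3/2\}$ obtained analytically.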

6.16 Test Functions


The efficiency of an optimization algorithm is studied using a set of standard functions. Several functions, involving different numbers of variables and representing a variety of complexities, have been used as test functions. Almost all the test functions presented in the literature are of the nonlinear least-squares type; that is, each function can be represented as

$$
f(x_1, x_2, \ldots, x_n) = \sum_{i=1}^{m} f_i(x_1, x_2, \ldots, x_n)^2 \qquad (6.139)
$$

where $n$ denotes the number of variables and $m$ indicates the number of functions $f_i$ that define the least-squares problem. The purpose of testing the functions is to show how well the algorithm performs compared with other algorithms. Usually, each test function is minimized from a standard starting point. The total number of function evaluations required to find the optimum solution is usually taken as a measure of the efficiency of the algorithm. References [6.29] to [6.32] present a comparative study of the various unconstrained optimization techniques. Some of the commonly used test functions are given below.
1. Rosenbrock's parabolic valley [6.8]:

$$
f(x_1, x_2) = 100\,(x_2 - x_1^2)^2 + (1 - x_1)^2 \qquad (6.140)
$$
$$
X_1 = \begin{Bmatrix} -1.2 \\ 1.0 \end{Bmatrix}, \qquad X^* = \begin{Bmatrix} 1 \\ 1 \end{Bmatrix}, \qquad f_1 = 24.2, \qquad f^* = 0.0
$$
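As a concrete illustration, Rosenbrock's function fits the least-squares form of Eq. (6.139) with $m = 2$ components, taking $f_1 = 10\,(x_2 - x_1^2)$ and $f_2 = 1 - x_1$. A short Python sketch checks this and evaluates the function at the standard starting point:

```python
def rosenbrock(x1, x2):
    """Rosenbrock's parabolic valley, Eq. (6.140)."""
    return 100.0 * (x2 - x1 ** 2) ** 2 + (1.0 - x1) ** 2

def rosenbrock_ls(x1, x2):
    """Same function in the least-squares form of Eq. (6.139), m = 2."""
    r1 = 10.0 * (x2 - x1 ** 2)
    r2 = 1.0 - x1
    return r1 ** 2 + r2 ** 2

# Standard starting point X1 = (-1.2, 1.0) and known minimum X* = (1, 1):
print(rosenbrock(-1.2, 1.0))   # -> 24.2 (up to floating-point rounding)
print(rosenbrock(1.0, 1.0))    # -> 0.0
print(rosenbrock_ls(-1.2, 1.0))
```

The two formulations agree term by term, since $f_1^2 + f_2^2 = 100\,(x_2 - x_1^2)^2 + (1 - x_1)^2$.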