708 Modern Methods of Optimization
Step 3: Generate a new design point in the vicinity of the current design point X2 =
{1.72, 5.84}^T. For this, we choose the range of each design variable as ±6 about
its current value so that the ranges are given by (−6 + 1.72, 6 + 1.72) =
(−4.28, 7.72) for x1 and (−6 + 5.84, 6 + 5.84) = (−0.16, 11.84) for x2. By
selecting two uniformly distributed random numbers in the range (0, 1) as
u1 = 0.92 and u2 = 0.73, the corresponding uniformly distributed random
numbers in the ranges of x1 and x2 become
r1 = −4.28 + u1[7.72 − (−4.28)] = −4.28 + 0.92(12) = 6.76
r2 = −0.16 + u2[11.84 − (−0.16)] = −0.16 + 0.73(12) = 8.60
which gives X3 = {r1, r2}^T = {6.76, 8.60}^T with a function value of f3 = 313.3264.
We note that the function value f3 is better than f2, with Δf = f3 − f2 =
313.3264 − 387.7312 = −74.4048.
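The neighbor-generation rule used above, r_i = x_min + u_i (x_max − x_min), can be checked in a few lines of Python (a sketch; the variable names are illustrative, not from the text):

```python
# Neighbor rule from the example: each variable is resampled uniformly
# within +/-6 of its current value, i.e. r_i = (x_i - 6) + u_i * 12.
u = [0.92, 0.73]   # the two random numbers chosen in the text
x2 = [1.72, 5.84]  # current design point X2
x3 = [(xi - 6.0) + ui * 12.0 for xi, ui in zip(x2, u)]
print([round(r, 2) for r in x3])  # [6.76, 8.6]
```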
Step 4: Since Δf < 0, we accept the current point as X3 and increase the iteration
number to i = 3. Since i > n, we go to step 5.
Step 5: Since a cycle of iterations with the current value of temperature is completed,
we reduce the temperature to a new value of T = 0.5(384.25) = 192.125.
Reset the current iteration number as i = 1 and go to step 3.
Step 3: Generate a new design point in the vicinity of the current design point X3 and
continue the procedure until the temperature is reduced to a small value (i.e., until
convergence).
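The loop through steps 3-5 (random trial points around the current design, acceptance of improvements together with the standard Metropolis rule for occasionally accepting worse points, and halving of the temperature after each cycle) can be sketched as follows. The objective function below is a toy quadratic, not the one used in the example, and the parameter names and seed are illustrative assumptions:

```python
import math
import random

def simulated_annealing(f, x0, T0, n_cycle, half_range=6.0, c=0.5,
                        T_min=1e-3, rng=random.Random(0)):
    """Minimal SA sketch: n_cycle trial points per temperature cycle,
    then the temperature is reduced by the factor c (0.5 in the text)."""
    x, fx, T = list(x0), f(x0), T0
    best_x, best_f = list(x), fx  # track the best point seen so far
    while T > T_min:
        for _ in range(n_cycle):
            # Step 3: neighbor sampled uniformly within +/- half_range
            y = [xi - half_range + rng.random() * 2 * half_range for xi in x]
            fy = f(y)
            df = fy - fx
            # Step 4: accept if better, or with probability exp(-df/T) otherwise
            if df < 0 or rng.random() < math.exp(-df / T):
                x, fx = y, fy
                if fx < best_f:
                    best_x, best_f = list(x), fx
        # Step 5: cooling schedule T <- c * T
        T *= c
    return best_x, best_f

# Toy objective (minimum at (1, 2)); T0 and the start point echo the example
best_x, best_f = simulated_annealing(lambda x: (x[0] - 1)**2 + (x[1] - 2)**2,
                                     [1.72, 5.84], T0=384.25, n_cycle=2)
```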
13.4 Particle Swarm Optimization
13.4.1 Introduction
Particle swarm optimization, abbreviated as PSO, is based on the behavior of a colony
or swarm of insects, such as ants, termites, bees, and wasps; a flock of birds; or a
school of fish. The particle swarm optimization algorithm mimics the behavior of these
social organisms. The word particle denotes, for example, a bee in a colony or a
bird in a flock. Each individual or particle in a swarm behaves in a distributed way
using its own intelligence and the collective or group intelligence of the swarm. Thus,
if one particle discovers a good path to food, the rest of the swarm will also be
able to follow the good path instantly, even if they are far away in the swarm.
Optimization methods based on swarm intelligence are called behaviorally inspired
algorithms, as opposed to genetic algorithms, which are called evolution-based
procedures. The PSO algorithm was originally proposed by Kennedy and Eberhart in
1995 [13.34].
In the context of multivariable optimization, the swarm is assumed to be of specified
or fixed size, with each particle located initially at a random position in the
multidimensional design space. Each particle is assumed to have two characteristics: a
position and a velocity. Each particle wanders around in the design space and
remembers the best position (in terms of the food source or objective function value)
it has discovered. The particles communicate information on good positions to each
other and adjust their individual positions and velocities based on the information
received.
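The behavior just described (a position and a velocity per particle, a remembered personal best, and attraction toward the best position found by the swarm) can be sketched with the standard PSO velocity-and-position update. The inertia and acceleration coefficients w, c1, c2 below are common illustrative choices, not values from the text:

```python
import random

rng = random.Random(1)

def pso(f, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal PSO sketch: each particle is pulled toward its own best
    position (pbest) and the swarm's best position (gbest)."""
    dim = len(bounds)
    x = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(p) for p in x]          # personal best positions
    pbest_f = [f(p) for p in x]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = list(pbest[g]), pbest_f[g]  # swarm best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # velocity update: inertia + cognitive pull + social pull
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                x[i][d] += v[i][d]
            fx = f(x[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = list(x[i]), fx
                if fx < gbest_f:
                    gbest, gbest_f = list(x[i]), fx
    return gbest, gbest_f

# Toy objective: minimize x^2 + y^2 over [-5, 5] x [-5, 5]
best, best_f = pso(lambda p: p[0]**2 + p[1]**2, [(-5, 5), (-5, 5)])
```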