728 Modern Methods of Optimization
processing capability. Neural computing strategies have been adopted in recent years to solve
optimization problems [13.23, 13.24]. A neural network is a massively
parallel network of interconnected simple processors (neurons) in which each neuron
accepts a set of inputs from other neurons and computes an output that is propagated
to the output nodes. Thus a neural network can be described in terms of the individual
neurons, the network connectivity, the weights associated with the interconnections
between neurons, and the activation function of each neuron. The network maps an
input vector from one space to another. The mapping is not specified but is learned.
Consider a single neuron as shown in Fig. 13.10. The neuron receives a set of
$n$ inputs, $x_i$, $i = 1, 2, \ldots, n$, from its neighboring neurons and a bias whose value
is equal to 1. Each input has a weight (gain) $w_i$ associated with it. The weighted
sum of the inputs determines the state or activity of a neuron, and is given by
$a = \sum_{i=1}^{n+1} w_i x_i = W^T X$, where $X = \{x_1\ x_2\ \cdots\ x_n\ 1\}^T$. A simple function is now used to
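The weighted sum above can be sketched in a few lines of code. This is an illustrative fragment, not part of the original text; the function name `neuron_state` and the sample numbers are assumptions, and the bias input $x_{n+1} = 1$ is appended explicitly as in the definition.

```python
# Hypothetical sketch of a neuron's state (activity):
#   a = sum_{i=1}^{n+1} w_i * x_i,  where x_{n+1} = 1 is the bias input.

def neuron_state(weights, inputs):
    """Return the activity a of a neuron given n+1 weights and n inputs."""
    x = list(inputs) + [1.0]  # append the bias input x_{n+1} = 1
    if len(weights) != len(x):
        raise ValueError("expected n+1 weights: one per input plus the bias")
    return sum(w * xi for w, xi in zip(weights, x))

# Example with n = 2 inputs and three weights (the last one scales the bias):
a = neuron_state([0.5, -0.25, 0.1], [2.0, 4.0])  # 1.0 - 1.0 + 0.1 = 0.1
```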
provide a mapping from the $n$-dimensional space of inputs into a one-dimensional
space of the output, which the neuron sends to its neighbors. The output of a neuron is
a function of its state and can be denoted as $f(a)$. Usually, no output will be produced
unless the activation level of the node exceeds a threshold value. The output of a neuron
is commonly described by a sigmoid function as
$$f(a) = \frac{1}{1 + e^{-a}} \tag{13.60}$$
which is shown graphically in Fig. 13.10. The sigmoid function can handle large as
well as small input signals. The slope of the function $f(a)$ represents the available
gain. Since the output of the neuron depends only on its inputs and the threshold value,
each neuron can be considered as a separate processor operating in parallel with other
neurons. The learning process consists of determining values for the weights $w_i$ that
lead to an optimal association of the inputs and outputs of the neural network.
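Putting the state and the sigmoid of Eq. (13.60) together gives the complete forward pass of a single neuron. The sketch below is illustrative; the function names `sigmoid` and `neuron_output` are assumptions, not from the text.

```python
import math

def sigmoid(a):
    """Sigmoid activation f(a) = 1 / (1 + e^(-a)), as in Eq. (13.60)."""
    return 1.0 / (1.0 + math.exp(-a))

def neuron_output(weights, inputs):
    """Forward pass of a single neuron: sigmoid of the weighted sum plus bias."""
    # state a = sum_i w_i x_i with the bias input x_{n+1} = 1 appended
    a = sum(w * xi for w, xi in zip(weights, list(inputs) + [1.0]))
    return sigmoid(a)

# With zero weights the state is a = 0, so the output sits at f(0) = 0.5,
# the midpoint of the sigmoid shown in Fig. 13.10:
y = neuron_output([0.0, 0.0, 0.0], [1.0, 2.0])  # 0.5
```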
Figure 13.10 Single neuron and its output. [13.23], reprinted with permission of Gordon &
Breach Science Publishers.