where $j$ is in the output layer. The $\partial E_q / \partial o_{qj}$ are known from previous calculations using the delta rule. Next,
\[
\frac{\partial o_{qj}}{\partial o_{qi}} = f_j'\!\left( \sum_{\ell=1}^{m} v_{j\ell}\, o_{q\ell} \right) v_{ji}
\]
Thus the $\delta_{qi}$, for hidden neurons $i$, are computed from the already-known values of $\delta_{qj}$ for all $j$ in the output layer. Because of this fact, this generalized delta rule is called the backpropagation algorithm. First feed the input patterns $x_q$ forward to reach the output layer and then calculate the $\delta_{qj}$'s for all output neurons $j$. Next propagate these $\delta_{qj}$'s backward to the layer below (here, the hidden layer) in order to calculate the $\delta_{qi}$'s for all neurons $i$ of that layer.
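
Combining this chain-rule factor with the already-known output-layer deltas gives the hidden-layer deltas in closed form. The display below is a standard restatement rather than a formula quoted from the text, and the label $\mathrm{net}_{qi}$ for the net input of hidden neuron $i$ is ours:
\[
\delta_{qi} = f_i'(\mathrm{net}_{qi}) \sum_{j} \delta_{qj}\, v_{ji}
\]
Each hidden delta is thus a weighted sum of the output deltas, scaled by the local slope of the activation function.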
There are two strategies for training.


1. The batch approach: The weights $w_{ik}$ are changed according to
\[
\Delta w_{ik} = -\eta \sum_{q=1}^{N} \frac{\partial E_q}{\partial w_{ik}}
\]
after all $N$ training patterns are presented to the neural network.

2. The incremental approach: Change the weights $w_{ik}$ after every training pattern $q$ is presented to the neural network, that is, using
\[
\Delta_q w_{ik} = -\eta\, \frac{\partial E_q}{\partial w_{ik}}
\]
to update $w_{ik}$.

Note that the batch update $\Delta w_{ik}$ is just the sum of the incremental updates $\Delta_q w_{ik}$, $q = 1, 2, \ldots, N$.
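
The difference between the two strategies is easy to see in code. The sketch below is illustrative only: the array shapes, the learning rate, and the precomputed per-pattern gradients (here called grads) are assumptions, not quantities from the text.

    import numpy as np

    eta = 0.1                                  # learning rate (illustrative)
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=(3, 4))     # weights w_ik
    grads = rng.normal(size=(5, 3, 4))         # dE_q/dw_ik for q = 1, ..., N (N = 5)

    # 1. Batch approach: accumulate the gradient over all N patterns, update once.
    w_batch = w - eta * grads.sum(axis=0)

    # 2. Incremental approach: update after every single pattern q.
    w_incr = w.copy()
    for grad_q in grads:
        w_incr -= eta * grad_q

    # With the gradients held fixed, the two coincide, which is exactly the
    # "sum of incremental updates" remark above. In real incremental training
    # each dE_q/dw is re-evaluated at the current weights, so the two
    # strategies generally follow different trajectories.
    assert np.allclose(w_batch, w_incr)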
In the backpropagation algorithm, we start with the calculation of the $\delta$ values in the output layer. Then we propagate the error vector backward from the output layer toward the input terminals. When all the weights have been updated with $\Delta_q w_{ik}$, the next training pattern $x_q$ is presented, and the procedure starts again. The stopping criterion can be, for example, a threshold on the error function $E$.
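
As a concrete illustration of this procedure, here is a minimal incremental training loop for a network with one hidden layer, assuming sigmoid activations and the squared error $E_q = \frac{1}{2}\sum_j (d_{qj} - o_{qj})^2$. Every name, size, and constant below is an assumption made for the sketch, not data from the text.

    import numpy as np

    def sigmoid(s):
        return 1.0 / (1.0 + np.exp(-s))

    rng = np.random.default_rng(1)
    X = rng.normal(size=(4, 2))               # N = 4 training patterns x_q
    D = rng.uniform(size=(4, 1))              # desired outputs d_q
    V = rng.normal(scale=0.1, size=(2, 3))    # input-to-hidden weights
    W = rng.normal(scale=0.1, size=(3, 1))    # hidden-to-output weights
    eta, threshold = 0.5, 1e-3                # learning rate, stopping threshold

    for epoch in range(10000):
        E = 0.0
        for x, d in zip(X, D):                # present pattern x_q
            z = sigmoid(x @ V)                # hidden-layer outputs
            o = sigmoid(z @ W)                # network outputs o_qj
            E += 0.5 * np.sum((d - o) ** 2)   # accumulate the error function E
            delta_out = (d - o) * o * (1 - o)            # output-layer deltas
            delta_hid = (W @ delta_out) * z * (1 - z)    # backpropagated deltas
            W += eta * np.outer(z, delta_out)            # incremental update of W
            V += eta * np.outer(x, delta_hid)            # incremental update of V
        if E < threshold:                     # stop once E falls below the threshold
            break

The plus sign in the updates follows from $\Delta_q w = -\eta\, \partial E_q / \partial w$ together with the error function above, whose gradient carries a minus sign.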
In summary, the backpropagation algorithm can be described by the follow-
ing steps.



  1. Initialize the weights $w_{ik}$ with small random values, and select the learning rate $\eta$.

  2. Apply the pattern $x_q$ to the input layer.
