

3. Propagate $x^q$ forward from the input layer to the output layer, using
   $$o_i = f_i\left( \sum_{k=0}^{p} w_{ik} o_k \right)$$


4. Calculate the error $E_q(w)$ on the output layer using
   $$E_q(w) = \frac{1}{2} \sum_{i=1}^{p} \left( o_i^q - y_i^q \right)^2$$


5. Calculate the $\delta$ values of the output layer using
   $$\delta_i^q = f_i'\left( \sum_{k=1}^{n} v_{ik} z_k \right) \left( o_i^q - y_i^q \right)$$


6. Calculate the $\delta$ values of the hidden layer by propagating the $\delta$ values backward, that is,
   $$\delta_i^q = f_i'\left( \sum_{k=0}^{n} w_{ik} x_k^q \right) \sum_{j=1}^{p} v_{ij} \delta_j^q$$


7. Use
   $$\Delta_q w_{ik} = -\eta\, \delta_i^q o_k^q$$
   for all weights $w_{ik}$ of the neural network.


8. Set $q \to q + 1$ and go to step 2. (A code sketch of one full pass through steps 3-7 appears below.)
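As a concrete illustration, the following Python sketch implements one pass through steps 3-7, plus the loop of step 8, for a single-hidden-layer network with sigmoidal activations, for which $f'(s) = f(s)(1 - f(s))$. The array shapes, the random initialization, and the function names are illustrative assumptions, not prescribed by the algorithm above.

```python
import numpy as np

def sigmoid(s):
    """Logistic activation f(s) = 1/(1 + e^(-s)); note f'(s) = f(s)(1 - f(s))."""
    return 1.0 / (1.0 + np.exp(-s))

def backprop_step(w, v, x, y, eta=0.1):
    """One incremental pass through steps 3-7 for a single training pair (x, y).

    w : hidden-layer weights, shape (n_hidden, n_inputs); w[i, k] multiplies x_k
    v : output-layer weights, shape (n_outputs, n_hidden); v[i, k] multiplies z_k
    x : augmented input vector, with x[0] = 1 as the bias input
    y : desired output vector
    """
    # Step 3: propagate x forward through the hidden and output layers.
    z = sigmoid(w @ x)                 # hidden outputs z_k
    o = sigmoid(v @ z)                 # network outputs o_i

    # Step 4: squared error on the output layer.
    E = 0.5 * np.sum((o - y) ** 2)

    # Step 5: deltas of the output layer, f'(net_i)(o_i - y_i),
    # with f' = f(1 - f) for the sigmoid.
    delta_o = o * (1.0 - o) * (o - y)

    # Step 6: propagate the deltas backward to the hidden layer.
    delta_h = z * (1.0 - z) * (v.T @ delta_o)

    # Step 7: gradient-descent updates, Delta_q w_ik = -eta * delta_i * o_k.
    v -= eta * np.outer(delta_o, z)
    w -= eta * np.outer(delta_h, x)
    return E

# Step 8 amounts to looping over the training pairs.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.5, size=(2, 3))   # 2 hidden units, 3 augmented inputs
v = rng.normal(scale=0.5, size=(1, 2))   # 1 output unit
for q in range(1000):
    E = backprop_step(w, v, np.array([1.0, 1.0, 3.0]), np.array([0.9]), eta=0.1)
```

The `-=` assignments modify `w` and `v` in place, so the caller's weight arrays carry the updated configuration from one training pair to the next.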

Both batch and incremental approaches can be applied for updating the weight configuration $w$. The relative effectiveness of the two approaches depends on the problem, but the incremental approach seems superior in most cases, especially for very regular or redundant training sets.
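To make the contrast concrete, here is a minimal sketch of the two update schedules; the linear stand-in model, the `gradient` helper, and the tiny training set are invented purely for the illustration.

```python
import numpy as np

def gradient(w, x, y):
    """Gradient of the squared error 0.5 * (w.x - y)^2 for one pair
    (a linear stand-in model, used only to show the two schedules)."""
    return (w @ x - y) * x

training_set = [(np.array([1.0, 1.0]), 0.5),
                (np.array([1.0, 2.0]), 0.9)]
eta = 0.1

# Incremental mode: update w immediately after every training pair.
w = np.zeros(2)
for x, y in training_set:
    w -= eta * gradient(w, x, y)

# Batch mode: accumulate the gradient over the whole epoch, then update once.
w = np.zeros(2)
g = sum(gradient(w, x, y) for x, y in training_set)
w -= eta * g
```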
The algorithm leads to $w^*$ such that
$$\nabla E(w^*) = 0,$$
which could be a local minimum.
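A one-variable illustration of this caveat: gradient descent on an invented polynomial error $E(w) = w^4 - 3w^2 + w$ converges to whichever stationary point lies downhill from the starting weight, so from $w_0 = +2$ it stops at the local, not the global, minimum.

```python
# E(w) = w**4 - 3*w**2 + w has a global minimum near w = -1.30 and a
# local minimum near w = +1.13; the limit point of gradient descent
# depends on the starting weight.
def dE(w):
    return 4 * w**3 - 6 * w + 1

for w0 in (-2.0, 2.0):
    w = w0
    for _ in range(2000):
        w -= 0.01 * dE(w)          # the update Delta w = -eta * dE/dw
    print(f"start {w0:+.1f} -> w* = {w:+.4f}, dE(w*) = {dE(w):+.1e}")
```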


5.6 Example 1: Training a neural network


We want to train the two-layer perceptron network shown in Figure 5.9 to respond with a desired output $d = 0.9$ at the output $y_1$ to the augmented input vector $x = [1, x_1, x_2]^T = [1, 1, 3]^T$. The network weights have been initialized as shown. Assuming sigmoidal activation functions at the outputs of the hidden and output nodes and a learning gain of $\eta = 0.1$ with no momentum term, we
