A First Course in FUZZY and NEURAL CONTROL


defined by

G_F(k) = max{ |F_A| : A ⊂ R^n, |A| = k }

Note that for all k ∈ N,

G_F(k) ≤ 2^k


The Vapnik-Chervonenkis dimension (VC dimension) of F is the size of
the largest shattered finite subset of R^n, or equivalently, the largest value of k
for which G_F(k) = 2^k.
In the case of the perceptron, we have

G_F(m) = 2 Σ_{k=0}^{n} C(m−1, k) =
    2^m                                       for n ≥ m − 1
    2^m − 2 Σ_{k=n+1}^{m−1} C(m−1, k)         for n < m − 1

where C(m−1, k) denotes the binomial coefficient, so that the VC dimension of F, denoted by D(F), is n + 1.
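As an illustrative check (the function names and the search bound are mine, not from the text), the perceptron growth function above can be evaluated numerically, and the largest m with G_F(m) = 2^m is indeed n + 1:

```python
from math import comb

def growth(m, n):
    # G_F(m) = 2 * sum_{k=0}^{n} C(m-1, k): the number of dichotomies of
    # m points in general position in R^n realizable by a perceptron
    return 2 * sum(comb(m - 1, k) for k in range(n + 1))

def vc_dimension(n, m_max=50):
    # Largest m for which all 2^m dichotomies are realizable,
    # i.e. the size of the largest shattered set
    return max(m for m in range(1, m_max) if growth(m, n) == 2 ** m)

print(vc_dimension(2))  # prints 3: a perceptron in R^2 shatters at most n + 1 = 3 points
```

For n < m − 1, the shortfall 2^m − G_F(m) is exactly the second sum in the formula, counting the dichotomies a perceptron cannot realize.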
For general neural networks with real-valued outputs, the class F of computable
functions is a class of functions rather than a class of sets. The concept of VC dimension of a class
of sets can be extended to the case of a class of functions by using subgraphs.
The subgraph of a function f is the subset of R^n × R defined by

S(f) = {(x, t) : x ∈ R^n, t ∈ R, t ≤ f(x)}

Then D(F) is defined as the VC dimension of the class of its subgraphs

{S(f) : f ∈ F}

Essentially, neural networks with finite VC dimensions are trainable.


5.4 The delta rule


The delta rule is a learning algorithm for single-layer neural networks. We
choose to present this learning algorithm [81] in some detail since it is a precursor
of the backpropagation algorithm for multi-layer neural networks.
The idea is to define a measure of the overall performance of a network, such
as the one shown in Figure 5.5, then find a way to optimize that performance.
Obviously, learning algorithms should change the weights so that the output o_q
becomes more and more similar to the target output y_q for all q = 1, 2, ..., N,
when presenting the input x_q to the network.
A suitable overall performance measure is


E = Σ_{q=1}^{N} E_q

where

E_q = (1/2) Σ_{i=1}^{m} (y_qi − o_qi)^2
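As a small sketch (the helper name and the toy numbers are mine, not from the text), the performance measure E can be computed directly from the targets and the network outputs:

```python
def total_error(targets, outputs):
    # E = sum over patterns q = 1..N of E_q, where
    # E_q = (1/2) * sum over output units i = 1..m of (y_qi - o_qi)^2
    return 0.5 * sum(
        (y - o) ** 2
        for y_q, o_q in zip(targets, outputs)   # one (target, output) pair per pattern
        for y, o in zip(y_q, o_q)               # one squared term per output unit
    )

# N = 2 patterns, m = 2 output units
y = [[1.0, 0.0], [0.0, 1.0]]
o = [[0.8, 0.1], [0.2, 0.7]]
print(total_error(y, o))  # 0.5 * (0.04 + 0.01 + 0.04 + 0.09) = 0.09, up to float rounding
```

Since E is a differentiable function of the weights (through the outputs o_qi), it is exactly the kind of performance measure that gradient-based learning rules can optimize.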