

requirements simultaneously. However, as with all tools, neural networks have
limitations.
The neural network approach consists of two phases: creating a set of
models and providing a mechanism for finding a good approximation. First, the
architecture of a neural network provides a set F of functions from X to Y. The
fact that F is a good set of models for φ is due to the universal approximation
property of neural networks; that is, φ can be approximated by some element
of F. Then the computer-oriented backpropagation algorithms of multilayer
neural networks are designed to implement this approximation.
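To make the second phase concrete, here is a minimal Python sketch of backpropagation: a one-hidden-layer network fitted to sample pairs of a target function. The particular target, hidden-layer size, and learning rate are assumptions invented for the example, not choices prescribed by the text.

```python
# Minimal backpropagation sketch. The target phi, hidden size H, and
# learning rate lr are illustrative assumptions for this example only.
import numpy as np

rng = np.random.default_rng(0)
phi = lambda x: np.sin(np.pi * x)      # stand-in for the unknown function phi
X = rng.uniform(-1.0, 1.0, (200, 1))   # sample inputs
Y = phi(X)                             # desired outputs (the "sample data")

H, lr = 10, 0.05                       # hidden units, learning rate
W1 = rng.normal(0.0, 1.0, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 1.0, (H, 1)); b2 = np.zeros(1)

for epoch in range(5000):
    A = np.tanh(X @ W1 + b1)           # forward pass: tanh hidden layer
    err = (A @ W2 + b2) - Y            # residual of the linear output layer
    dW2 = A.T @ err / len(X); db2 = err.mean(0)   # backward pass (chain rule)
    dA = (err @ W2.T) * (1.0 - A**2)   # derivative through tanh
    dW1 = X.T @ dA / len(X); db1 = dA.mean(0)
    W2 -= lr * dW2; b2 -= lr * db2     # gradient-descent update
    W1 -= lr * dW1; b1 -= lr * db1

print("final mean-squared error:", float((err**2).mean()))
```

The weights selected by this search play the role of the element of F that approximates φ.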
Instead of having the desired pairs, or sample data, suppose we only have
experts' knowledge as information to find φ. Then we are in the domain of
applicability of fuzzy control. The set of models is created by experts' linguistic
"If ... then ..." rules. A similar universal approximation property holds for
fuzzy systems. However, the search for a good approximation is done by tuning;
that is, there is no systematic procedure for reaching a good approximation of φ.
This drawback can be compensated by using neural networks in fuzzy control,
leading to what is called neural-fuzzy control, in which the information used to
construct φ is a fuzzy rule base represented in a neural network structure, so that
the systematic search by the backpropagation algorithms of neural networks can
be used to find a good approximation of φ. Of course, this requires experiments
on the controlled systems to extract samples. The advantage of this approach
is that we incorporate all available information (experts' knowledge and
numerical data) in the process of constructing the control law φ. This
replaces manual tuning and is referred to as optimization of fuzzy rules.
Indeed, when the linguistic labels in fuzzy rules are modeled by parametric
membership functions, the learning algorithm of neural networks optimizes the
parameters of the fuzzy rules, leading to the desired approximation of φ.
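The following sketch illustrates, under stated assumptions, what such optimization of fuzzy rules can look like: a small Sugeno-style rule base whose Gaussian membership parameters (centers c, widths s) and rule consequents w are adjusted by the same gradient-descent machinery a neural network uses. The target function, rule count, and learning rate are invented for the example.

```python
# Hedged sketch of optimizing parametric fuzzy rules by gradient descent.
# Rules: "If x is A_i then y = w_i", with Gaussian memberships A_i(x).
# The target phi, rule count R, and learning rate lr are assumptions.
import numpy as np

rng = np.random.default_rng(1)
phi = lambda x: np.cos(2.0 * x)        # stand-in for the control law phi
X = rng.uniform(-2.0, 2.0, 300)        # samples from the controlled system
Y = phi(X)

R, lr = 5, 0.1
c = np.linspace(-2.0, 2.0, R)          # membership centers
s = np.full(R, 0.8)                    # membership widths
w = np.zeros(R)                        # rule consequents

for epoch in range(3000):
    mu = np.exp(-((X[:, None] - c) ** 2) / (2.0 * s**2))  # firing strengths
    S = mu.sum(axis=1, keepdims=True)
    g = mu / S                          # normalized firing strengths
    yhat = g @ w                        # weighted-average defuzzification
    err = yhat - Y
    grad = 2.0 * err[:, None]                            # d(MSE)/d(yhat)
    dyh_dmu = (w[None, :] - yhat[:, None]) / S           # through normalization
    dw = (grad * g).mean(axis=0)
    dc = (grad * dyh_dmu * mu * (X[:, None] - c) / s**2).mean(axis=0)
    ds = (grad * dyh_dmu * mu * (X[:, None] - c) ** 2 / s**3).mean(axis=0)
    w -= lr * dw
    c -= lr * dc
    s = np.maximum(s - lr * ds, 1e-2)   # keep widths positive

print("final mean-squared error:", float((err**2).mean()))
```

Here the rule base is exactly a small network whose tunable weights are the membership parameters, which is the point of the neural-fuzzy combination.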
With the above program in mind, it is necessary to assess the learning
capability of neural networks. An excellent study of this issue can be found in [4].
Roughly speaking, although the universal approximation property states that
neural networks can approximate a large class of functions, in practice we also
need to know how much training data is needed to obtain a good approximation.
This clearly depends on the complexity of the neural network considered.
To give a measure of this complexity, we define the Vapnik-Chervonenkis
dimension of the class of functions computable by the neural network. Consider
the perceptron with n real inputs and binary output in {0, 1}. Suppose we have
a training sample T = {(x_i, y_i), i = 1, ..., m} with x_i ∈ R^n and y_i ∈ {0, 1}. Let
S = {x_1, x_2, ..., x_m} ⊂ R^n with |S| = m, and let F be the class of all functions
computable (representable) by the perceptron. Here, F can be thought of as a
class of subsets of R^n, since each element of F is a function from R^n to {0, 1}.
Let F_S be the class of restrictions of functions in F to S, and identify the
class F_S with a collection of subsets of S. If |F_S| = 2^|S| = 2^m, we say that F
shatters S. The growth function of the class F is a special map

G_F : N → N

where N = {0, 1, 2, ...} is the set of natural numbers. The growth function is
defined by

G_F(m) = max{ |F_S| : S ⊂ R^n, |S| = m },

the largest number of distinct dichotomies of an m-point set that functions in F
can realize.
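To make shattering and the growth function concrete, the following sketch (an illustration, not from the text) counts |F_S| for small point sets in R^2 by testing every one of the 2^m labelings for linear separability with a feasibility linear program; G_F(m) is the maximum of this count over all m-point sets S. The particular point sets and the use of scipy's linprog are choices made for the example.

```python
# Count the dichotomies of a point set realizable by a perceptron
# (linear threshold unit with bias). Point sets are example choices.
from itertools import product
import numpy as np
from scipy.optimize import linprog

def separable(X, y):
    """Feasibility LP: does some (w, b) give y_i (w.x_i + b) >= 1 for all i?"""
    signs = np.where(y == 1, 1.0, -1.0)
    A_ub = -signs[:, None] * np.hstack([X, np.ones((len(X), 1))])
    b_ub = -np.ones(len(X))
    res = linprog(np.zeros(X.shape[1] + 1), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (X.shape[1] + 1))
    return res.success

def realized_dichotomies(X):
    """|F_S| for S = rows of X: labelings a perceptron can reproduce."""
    m = len(X)
    return sum(separable(X, np.array(lab)) for lab in product([0, 1], repeat=m))

three = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # general position
four = np.vstack([three, [[1.0, 1.0]]])                   # corners of a square

print(realized_dichotomies(three))   # 8 = 2^3: this 3-point set is shattered
print(realized_dichotomies(four))    # 14 < 16: the XOR labelings fail
```

For perceptrons on R^n, no set of n + 2 points can be shattered, so the Vapnik-Chervonenkis dimension is n + 1; the run above reflects this for n = 2.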
