
estimator (Kalman filtering algorithm) described on page 263. There are a total
of 48 consequent parameters and 21 premise parameters for the fuzzy partition
described above. The premise parameters, namely the width, center, and slope of
each generalized bell function, are updated, or tuned, using the gradient descent
method. In the backward pass, the error signals propagate backwards, and the
error at each layer is used to update the premise parameters.
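
For reference, the generalized bell function referred to here has the standard
form mu(x) = 1 / (1 + |(x - c)/a|^(2b)), with width a, slope b, and center c.
A minimal Python sketch (the parameter values shown are illustrative, not the
trained values), together with the parameter count for the partition above:

    import numpy as np

    def generalized_bell(x, a, b, c):
        # Generalized bell membership function:
        # a = width, b = slope, c = center.
        return 1.0 / (1.0 + np.abs((x - c) / a) ** (2.0 * b))

    # A 2 x 3 x 2 partition of the three inputs uses 2 + 3 + 2 = 7
    # membership functions, each with 3 parameters (a, b, c):
    # 7 * 3 = 21 premise parameters. Its 2 * 3 * 2 = 12 rules each
    # carry 4 linear coefficients: 12 * 4 = 48 consequent parameters.
    x = np.linspace(-1.0, 1.0, 5)
    print(generalized_bell(x, a=0.5, b=2.0, c=0.0))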


Membership updates The selection of the membership function depends on
the application and the quality of the data. As mentioned by one of the reviewers
of the original ANFIS paper [35], the learning mechanism should not be applied
to determine membership functions in the Sugeno ANFIS, since they convey
linguistic and subjective descriptions of possibly ill-defined concepts. However,
if the data set is large, then fine-tuning of the membership functions is
recommended (or even necessary), since human-determined membership functions
are seldom optimal in terms of reproducing desired outputs. If the data set is
too small, then it probably does not contain enough information about the
target system, and membership functions determined from it might fail to
capture information important to the application. In a case like this, it is
recommended that the membership functions be fixed throughout the learning
process. If the membership functions are fixed and only the consequent part is
updated, the Sugeno ANFIS can be viewed as a functional-link network, where the
"enhanced representations" of the input variables are obtained via the
membership functions. These enhanced representations, determined by human
experts, apparently provide more insight into the target system than the
functional expansion or the tensor (outer product) models. By updating the
membership functions, we are actually tuning these enhanced representations for
better performance [36].
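
As a small illustration of this functional-link view, fixed membership
functions map a raw input into a vector of membership grades, and only the
linear consequent layer acting on that vector is trained. The membership
parameters below are hand-chosen placeholders, not values from this application:

    import numpy as np

    def bell(x, a, b, c):
        # Fixed generalized bell membership function (not trained here).
        return 1.0 / (1.0 + np.abs((x - c) / a) ** (2 * b))

    # "Enhanced representation" of a raw input x: its grades under a
    # fixed fuzzy partition (three hand-chosen membership functions).
    partition = [(0.5, 2.0, -1.0), (0.5, 2.0, 0.0), (0.5, 2.0, 1.0)]
    x = 0.3
    enhanced = np.array([bell(x, a, b, c) for a, b, c in partition])
    print(enhanced)   # only the consequent layer on top of this is updated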


ANFIS results The training data previously formulated to train the
backpropagation neural network is used to train the ANFIS and tune the premise
and consequent parameters. In the computation of the consequent parameters


a_{ji}, i = 0, ..., 3 and j = 1, ..., 12

using Equation 8.9, the outputs of each layer, up to Layer 3, are computed for
all patterns. The target vectors d, which hold the desired value for each trash
type, are fuzzy singletons. The desired value during training is 0.2 for bark
objects, 0.4 for stick objects, 0.6 for leaf objects, and 0.8 for pepper
objects. The consequent parameters are then calculated by solving this
over-constrained system of equations. Table 8.7 illustrates the final values of
the consequent parameters after 200 epochs of training for the 2x3x2 partition.
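
The consequent parameters can be obtained either recursively, via the Kalman
filter estimator mentioned above, or in one batch least-squares step. The
following is a minimal sketch of the batch step, assuming the Layer 3
normalized firing strengths have already been computed; the array names,
shapes, and random data here are illustrative placeholders, not the actual
trash data:

    import numpy as np

    P, R, n = 200, 12, 3                  # patterns, rules, inputs
    rng = np.random.default_rng(0)
    w_bar = rng.random((P, R))            # normalized firing strengths (Layer 3)
    w_bar /= w_bar.sum(axis=1, keepdims=True)
    X = rng.random((P, n))                # input patterns
    d = rng.choice([0.2, 0.4, 0.6, 0.8], size=P)  # fuzzy-singleton targets

    # Rule j contributes w_bar_j * (a_j0 + a_j1 x1 + a_j2 x2 + a_j3 x3),
    # so the output is linear in the 12 * 4 = 48 consequent parameters.
    X1 = np.hstack([np.ones((P, 1)), X])  # prepend the constant term (i = 0)
    A = np.einsum('pj,pi->pji', w_bar, X1).reshape(P, R * (n + 1))

    # Solve the over-constrained system A theta ~= d in the least-squares sense.
    theta, *_ = np.linalg.lstsq(A, d, rcond=None)
    a = theta.reshape(R, n + 1)           # a[j, i] corresponds to a_ji
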
The outputs of Layer 4 are calculated based on the consequent parameters. The
error signals backpropagated to each layer are then computed, and the premise
parameters in Layer 1 are updated by gradient descent using the errors that
reach that layer. Table 8.8 shows the initial and final values of the premise
parameters after 200 epochs of training.
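
Schematically, the backward-pass update for one bell function's parameters
looks like the following sketch. The learning rate eta and the backpropagated
factor dE_dmu are illustrative placeholders; in ANFIS, dE_dmu is accumulated by
the chain rule through Layers 4 down to 2:

    import numpy as np

    def bell_gradients(x, a, b, c):
        # Partials of mu = 1 / (1 + |(x - c)/a|^(2b)) w.r.t. a, b, c.
        # Assumes x != c, so neither log(u) nor 1/(x - c) blows up.
        u = np.abs((x - c) / a)
        mu = 1.0 / (1.0 + u ** (2 * b))
        common = mu * (1.0 - mu)
        dmu_da = (2.0 * b / a) * common
        dmu_db = -2.0 * np.log(u) * common
        dmu_dc = (2.0 * b / (x - c)) * common
        return dmu_da, dmu_db, dmu_dc

    # One gradient-descent step on a single pattern (illustrative values).
    eta, dE_dmu = 0.01, 0.1
    a, b, c, x = 0.5, 2.0, 0.0, 0.3
    da, db, dc = bell_gradients(x, a, b, c)
    a -= eta * dE_dmu * da
    b -= eta * dE_dmu * db
    c -= eta * dE_dmu * dc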
