220 CHAPTER 6. NEURAL CONTROL
corresponding vector of outputs. Select a suitable number of vector elements. In
this example we have chosen to generate a vector of 301 random input elements.
u=rands(1,301)*2; % u(k) ∈ [-2,2]
Step 3. Simulate the response of the nonlinear system using the random input
vector generated in Step 2 and create the input-output pairs of training data.
for k=2:301
   yp(k+1)=yp(k)*(yp(k-1)+2)*(yp(k)+2.5)/(8.5+yp(k)^2+yp(k-1)^2)+u(k);
   out(k-1)=(yp(k+1)-u(k))/20; % Output training data
   in(k-1)=yp(k)/20;           % Input training data
end;
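As a quick check of the simulation loop, the same computation can be sketched in Python. Zero initial conditions for yp are an assumption here; the excerpt does not show how yp(1) and yp(2) are initialized.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random input vector with u(k) in [-2, 2] (mirrors rands(1,301)*2).
u = rng.uniform(-2.0, 2.0, size=301)

# Zero initial conditions are an assumption, not taken from the text.
yp = np.zeros(302)
out = np.zeros(300)
inp = np.zeros(300)

# Matlab k=2:301 corresponds to 0-based k=1..300 here.
for k in range(1, 301):
    yp[k + 1] = (yp[k] * (yp[k - 1] + 2) * (yp[k] + 2.5)
                 / (8.5 + yp[k] ** 2 + yp[k - 1] ** 2) + u[k])
    out[k - 1] = (yp[k + 1] - u[k]) / 20.0  # scaled output training data
    inp[k - 1] = yp[k] / 20.0               # scaled input training data
```

The division by 20 is the scaling factor discussed next; it keeps the training data well inside the active region of a sigmoidal unit.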
Notice that the output and input training data have been scaled, in this case
by a factor of 20. Scaling is needed when using sigmoidal functions so that the
"squashed" data retain an adequate spread. The user must experiment with this
and select a suitable scaling factor that helps the neural network converge.
Over-scaling or under-scaling can significantly degrade the network's
convergence properties.
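The effect can be illustrated numerically: an unscaled value drives a tansig (tanh) unit deep into saturation, where its derivative, and hence the backpropagated gradient, is nearly zero. A minimal Python sketch with hypothetical values:

```python
import numpy as np

def tanh_grad(x):
    # Derivative of tanh: 1 - tanh(x)^2; this factor multiplies the
    # backpropagated error at a tansig unit.
    return 1.0 - np.tanh(x) ** 2

raw = 15.0           # an unscaled plant output (hypothetical value)
scaled = raw / 20.0  # the same value after the factor-of-20 scaling

print(tanh_grad(raw))     # essentially zero: the unit is saturated
print(tanh_grad(scaled))  # a usable gradient survives
```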
Step 4. Set up the input-output data vectors for neural network training. Here
the input data "plantin" is set up as pairs of the form

    [yp(k); yp(k+1)]

namely,

    [in(1); in(2)], [in(2); in(3)], [in(3); in(4)], ..., [in(298); in(299)], [in(299); in(300)]
and the corresponding output "plantout" for each input pair is set up as

    [out(1), out(2), out(3), ..., out(299)]
These are the pattern and target vectors that the neural network uses for train-
ing. The following Matlab code performs this setup.
plantin=[in(1:299); in(2:300)]; plantout=out(1:299);
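The same pattern/target setup can be sketched in Python; the stand-in arrays below are assumptions in place of the training data produced in Step 3.

```python
import numpy as np

# Stand-ins for the scaled training data of Step 3 (assumed values).
inp = np.linspace(-0.1, 0.1, 300)   # in(1..300) in the Matlab code
out = np.linspace(-0.1, 0.1, 300)   # out(1..300)

# plantin = [in(1:299); in(2:300)]: a 2 x 299 matrix whose columns
# are the pairs [in(k); in(k+1)].
plantin = np.vstack([inp[0:299], inp[1:300]])

# plantout = out(1:299): one target per input column.
plantout = out[0:299]

print(plantin.shape)   # (2, 299)
print(plantout.shape)  # (299,)
```

Each column of plantin pairs a sample with its successor, so the network learns to map consecutive plant states to the corresponding output.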
Step 5. As a first attempt to obtain a suitable trained neural network, we
need to choose the network topology. The number of layers and the number of
neurons in each layer are at issue. Choosing a large number of neurons obviously
increases the computations and hence affects the time to converge. For a trial,
we choose 1 neuron in the first hidden layer, 10 neurons in the second hidden
layer, and 1 output neuron. The selection of the number of output neurons is
based upon the number of outputs in the system. Select the appropriate activa-
tion function that characterizes the input-output behavior. For this example, a
bipolar activation function is necessary and hence the Matlab "tansig" func-
tion is chosen in all three layers. The "trainlm" algorithm is a network training