Plots of the proportional gain versus the gradient of the pattern vector and of the integral gain versus the pattern vector are then obtained, as illustrated in Figures 6.10 and 6.11.
Figure 6.10. Proportional gain versus gradient of the pattern vector
Figure 6.11. Integral gain versus pattern vector
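The code used to produce these plots is not reproduced in the text. A minimal sketch, assuming the training data are stored in vectors dp (gradient of the pattern vector), p (pattern vector), Kp (proportional gains), and Ki (integral gains), might look as follows; all variable names are illustrative assumptions.
% Illustrative sketch only -- variable names are assumed, not from the text.
figure;
plot(dp, Kp, 'o');                     % cf. Figure 6.10
xlabel('Gradient of pattern vector');
ylabel('Proportional gain');
figure;
plot(p, Ki, 'o');                      % cf. Figure 6.11
xlabel('Pattern vector');
ylabel('Integral gain');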
We then set up two feedforward neural networks according to the Matlab
neural network function "newff" syntax:
net = newff(PR,[S1 S2...SNl],{TF1 TF2...TFNl},BTF,BLF,PF).
In our example, each network is a two-layer feedforward network. The first hidden
layer has 10 "tansig" neurons, and the output layer has one "poslin" neuron.
We once again use "trainlm" as the default neural network training function.
The default backpropagation learning function "learngdm" and mean-squared
error performance function "mse" are used as before. The first neural network
"net1" is used to control the proportional gain, and the second neural network
"net2" is used to control the integral gain.
net1 = newff([min(dp) max(dp)],[10 1],{'tansig','poslin'}, ...
             'trainlm','learngdm','mse');
net2 = newff([min(p) max(p)],[10 1],{'tansig','poslin'}, ...
             'trainlm','learngdm','mse');
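Recall that the first argument to newff is an R-by-2 matrix of minimum and maximum values for the R input elements; since each network here takes a single scalar input, [min(dp) max(dp)] and [min(p) max(p)] are 1-by-2 range vectors. The "poslin" (positive linear) output neuron guarantees non-negative outputs, which is appropriate for controller gains. As an illustrative check, not part of the original design, the created network objects can be inspected through their properties:
% Illustrative check only: inspect the network objects created by newff.
net1.inputs{1}.range          % 1-by-2 input range, i.e. [min(dp) max(dp)]
net1.layers{1}.transferFcn    % 'tansig' hidden layer
net1.layers{2}.transferFcn    % 'poslin' output layer (non-negative outputs)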
As noted in the previous example, it is best to initialize all neural network
parameters prior to training. We initialize both networks as follows: