nonlinear neural network model at each sampling time.
Assume that a neural network input-output model of the system to be
controlled is described as a function of past outputs $y(t-i)$ and past
inputs $u(t-d-i)$ in the form
\[
y(t) = g(x(t))
\]
where the vector
\[
x(t) = \left[\, y(t-1), \ldots, y(t-n),\; u(t-d), \ldots, u(t-d-m) \,\right]^T
\]
defines the state of the system.
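The sketch below illustrates how such a regressor can be assembled from recorded data. It assumes the output and input samples are stored in arrays indexed by sample time and that $t$ is large enough for all past values to exist; the helper name `build_regressor` is illustrative and not taken from the text.

```python
import numpy as np

def build_regressor(y, u, t, n, m, d):
    """Assemble x(t) = [y(t-1),...,y(t-n), u(t-d),...,u(t-d-m)]^T
    from recorded output samples y and input samples u."""
    past_outputs = [y[t - i] for i in range(1, n + 1)]        # y(t-1),...,y(t-n)
    delayed_inputs = [u[t - d - i] for i in range(0, m + 1)]  # u(t-d),...,u(t-d-m)
    return np.array(past_outputs + delayed_inputs)            # length n + m + 1

# e.g. n = 2, m = 1, d = 1 gives a regressor of length n + m + 1 = 4:
# x_t = build_regressor(y, u, t=10, n=2, m=1, d=1)
```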
At time $t = \tau$, linearize the function $g = g(x_1, \ldots, x_{n+m+1})$
around the current state $x(\tau)$ to obtain the approximate model
\[
\tilde{y}(t) = -a_1 \tilde{y}(t-1) - \ldots - a_n \tilde{y}(t-n)
             + b_0 \tilde{u}(t-d) + \ldots + b_m \tilde{u}(t-d-m)
\]
where
\[
\tilde{y}(t-i) = y(t-i) - y(\tau-i), \qquad
\tilde{u}(t-i) = u(t-i) - u(\tau-i)
\]
\[
a_i = -\left.\frac{\partial g(x(t))}{\partial x_i}\right|_{t=\tau}
\quad \text{for } 1 \le i \le n, \qquad
b_i = \left.\frac{\partial g(x(t))}{\partial x_{n+i+1}}\right|_{t=\tau}
\quad \text{for } 0 \le i \le m
\]
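Before specializing to a particular network, the coefficients can be obtained numerically for any model $g$ that is available as a callable mapping a regressor vector to a scalar prediction. The following sketch uses central differences; the function name and the step size are illustrative choices, not part of the text.

```python
import numpy as np

def instantaneous_linearization(g, x_tau, n, m, eps=1e-6):
    """Linearize y(t) = g(x(t)) around the current regressor x(tau).

    Returns (a, b) with a_i = -dg/dx_i (1 <= i <= n) and
    b_i = +dg/dx_{n+i+1} (0 <= i <= m), evaluated at x(tau)
    by central differences.
    """
    x_tau = np.asarray(x_tau, dtype=float)
    grad = np.empty_like(x_tau)
    for i in range(x_tau.size):
        step = np.zeros_like(x_tau)
        step[i] = eps
        grad[i] = (g(x_tau + step) - g(x_tau - step)) / (2.0 * eps)
    return -grad[:n], grad[n:n + m + 1]

# Toy example with n = 2, m = 1 (regressor length 4):
g = lambda x: 0.8 * x[0] - 0.2 * x[1] ** 2 + 0.5 * np.tanh(x[2]) + 0.1 * x[3]
a, b = instantaneous_linearization(g, [0.5, -0.3, 1.0, 0.2], n=2, m=1)
```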
For a multi-layer perceptron (MLP) network with $n_x$ inputs, one hidden
layer of $n_h$ tanh units, and a linear output
\[
y(t) = \sum_{j=1}^{n_h} W_j \tanh\!\left( \sum_{k=1}^{n_x} w_{jk}\, x_k(t) + w_{j0} \right) + W_0,
\]
the derivative of the output with respect to input $x_i(t)$ is calculated in
accordance with
\[
\frac{\partial g(x(t))}{\partial x_i(t)} =
\sum_{j=1}^{n_h} W_j\, w_{ji}
\left( 1 - \tanh^2\!\left( \sum_{k=1}^{n_x} w_{jk}\, x_k(t) + w_{j0} \right) \right)
\]
where the network inputs are identified with the components of the regressor,
\[
(x_1, \ldots, x_{n_x}) = \big(\, y(t-1), \ldots, y(t-n),\; u(t-d), \ldots, u(t-d-m) \,\big)
\]
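For the MLP the derivative is thus available in closed form, so the coefficients can be computed without numerical differentiation. The sketch below assumes the weights are stored as a hidden-layer matrix `W1` of shape $(n_h, n_x)$ with biases `b1`, and an output weight vector `W2` with bias `b2`; these names and the storage layout are assumptions made for the illustration.

```python
import numpy as np

def mlp_predict(x, W1, b1, W2, b2):
    """y = sum_j W2_j * tanh(sum_k W1_{jk} x_k + b1_j) + b2."""
    return float(W2 @ np.tanh(W1 @ x + b1) + b2)

def mlp_input_gradient(x, W1, b1, W2, b2):
    """dg/dx_i = sum_j W2_j * W1_{ji} * (1 - tanh^2(W1_j . x + b1_j))."""
    h = np.tanh(W1 @ x + b1)           # hidden activations, shape (nh,)
    return (W2 * (1.0 - h ** 2)) @ W1  # gradient w.r.t. the inputs, shape (nx,)

def mlp_linearization(x_tau, n, m, W1, b1, W2, b2):
    """a_i = -dg/dx_i (1 <= i <= n), b_i = +dg/dx_{n+i+1} (0 <= i <= m),
    evaluated at the current regressor x(tau)."""
    grad = mlp_input_gradient(np.asarray(x_tau, dtype=float), W1, b1, W2, b2)
    return -grad[:n], grad[n:n + m + 1]
```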
The approximate model can also be expressed as
\[
y(t) = -\sum_{i=1}^{n} a_i\, y(t-i) + \sum_{i=0}^{m} b_i\, u(t-d-i)
+ \left( y(\tau) + \sum_{i=1}^{n} a_i\, y(\tau-i) - \sum_{i=0}^{m} b_i\, u(\tau-d-i) \right)
\]
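In this form the model is affine in the regressor, with the term in parentheses acting as a bias that depends only on the operating point at time $\tau$. The sketch below evaluates this expression; the helper name and argument layout are illustrative, and by construction the bias makes the prediction exact at $x(t) = x(\tau)$ regardless of the coefficient values.

```python
import numpy as np

def affine_model(a, b, y_past, u_past, y_tau_past, u_tau_past, y_tau):
    """y(t) = -sum_i a_i y(t-i) + sum_i b_i u(t-d-i) + bias(tau),
    bias(tau) = y(tau) + sum_i a_i y(tau-i) - sum_i b_i u(tau-d-i).

    y_past, u_past hold the regressor at time t; y_tau_past, u_tau_past
    hold the regressor at the linearization time tau."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    bias = y_tau + a @ np.asarray(y_tau_past) - b @ np.asarray(u_tau_past)
    return float(-a @ np.asarray(y_past) + b @ np.asarray(u_past) + bias)

# Sanity check: at x(t) = x(tau) the bias cancels the coefficient terms,
# so the affine model returns y(tau) exactly.
a, b = np.array([0.7, -0.2]), np.array([0.5, 0.1])
y_tau_past, u_tau_past, y_tau = np.array([1.0, 0.8]), np.array([0.3, -0.1]), 1.2
assert abs(affine_model(a, b, y_tau_past, u_tau_past,
                        y_tau_past, u_tau_past, y_tau) - y_tau) < 1e-12
```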