224 CHAPTER 6. NEURAL CONTROL
nonlinear neural network model at each sampling time.
Assume that a neural network input-output model of the system to be
controlled is described as a function of past outputs y(t−i) and past inputs
u(t−d−i) in the form
y(t)=g(x(t))
where the vector
x(t) = [y(t−1), ..., y(t−n), u(t−d), ..., u(t−d−m)]^T
defines the state of the system. At time t = τ, linearize the function
g = g(x_1, ..., x_{n+m+1})
around the current state x(τ) to obtain the approximate model
ỹ(t) = −a_1 ỹ(t−1) − ... − a_n ỹ(t−n) + b_0 ũ(t−d) + ... + b_m ũ(t−d−m)
where
ỹ(t−i) = y(t−i) − y(τ−i)
ũ(t−i) = u(t−i) − u(τ−i)
a_i = − ∂g(x(t))/∂x_i |_{t=τ}    for 1 ≤ i ≤ n
b_i = ∂g(x(t))/∂x_{n+i+1} |_{t=τ}    for 0 ≤ i ≤ m
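As a concrete illustration of extracting the coefficients a_i and b_i, the following is a minimal NumPy sketch. It uses central differences rather than the analytic network gradient, and the model g, the state x(τ), and the orders n = 2, m = 1 are hypothetical stand-ins, not values from the text:

```python
import numpy as np

# Hypothetical smooth one-step-ahead model g(x(t)); stands in for a trained network.
def g(x):
    return np.tanh(x @ np.array([0.5, -0.3, 0.8, 0.2]))

def linearize(g, x_tau, n, m, eps=1e-6):
    """Central-difference gradient of g at the current state x(tau),
    converted to the coefficients a_i (past outputs) and b_i (past inputs)."""
    grad = np.zeros_like(x_tau, dtype=float)
    for i in range(x_tau.size):
        dx = np.zeros_like(x_tau, dtype=float)
        dx[i] = eps
        grad[i] = (g(x_tau + dx) - g(x_tau - dx)) / (2 * eps)
    a = -grad[:n]           # a_i = -dg/dx_i,          1 <= i <= n
    b = grad[n:n + m + 1]   # b_i = +dg/dx_{n+i+1},    0 <= i <= m
    return a, b

# x(τ) = [y(τ-1), y(τ-2), u(τ-d), u(τ-d-1)] for n = 2, m = 1 (so n+m+1 = 4 inputs)
x_tau = np.array([0.1, -0.2, 0.05, 0.3])
a, b = linearize(g, x_tau, n=2, m=1)
```

Finite differences only require that g be callable, so the same sketch works for any trained model; the analytic gradient derived below is cheaper and exact for an MLP.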
For a multi-layer perceptron (MLP) network with n_x inputs, one hidden
layer of n_h tanh units, and a linear output
y(t) = Σ_{j=1}^{n_h} W_j tanh( Σ_{k=1}^{n_x} w_{jk} x_k(t) + w_{j0} ) + W_0
the derivative of the output with respect to input x_i(t) is calculated in accor-
dance with
∂g(x(t))/∂x_i(t) = Σ_{j=1}^{n_h} W_j w_{ji} ( 1 − tanh²( Σ_{k=1}^{n_x} w_{jk} x_k(t) + w_{j0} ) )
where
(x_1, ..., x_{n_x}) = (y_1, ..., y_n, u_0, u_1, ..., u_m)
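The analytic gradient above can be sketched in a few lines of NumPy and cross-checked against central differences. The network sizes n_x = 4, n_h = 5 and the random weights are arbitrary illustrative choices, not values from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
nx, nh = 4, 5                       # assumed sizes for illustration
w  = rng.standard_normal((nh, nx))  # w[j, k] = w_{jk}: input k -> hidden unit j
w0 = rng.standard_normal(nh)        # hidden biases w_{j0}
W  = rng.standard_normal(nh)        # output weights W_j
W0 = rng.standard_normal()          # output bias W_0

def g(x):
    """y(t) = sum_j W_j tanh(sum_k w_{jk} x_k + w_{j0}) + W_0"""
    return W @ np.tanh(w @ x + w0) + W0

def g_grad(x):
    """dg/dx_i = sum_j W_j w_{ji} (1 - tanh^2(sum_k w_{jk} x_k + w_{j0}))"""
    h = np.tanh(w @ x + w0)
    return (W * (1.0 - h**2)) @ w

x = rng.standard_normal(nx)
analytic = g_grad(x)

# Cross-check against central differences.
eps = 1e-6
numeric = np.array([(g(x + eps * e) - g(x - eps * e)) / (2 * eps)
                    for e in np.eye(nx)])
```

The gradient of all n_x inputs is obtained from a single forward pass through the hidden layer, which is what makes relinearizing at every sampling time cheap.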
The approximate model can also be expressed as
y(t) = − Σ_{i=1}^{n} a_i y(t−i) + Σ_{i=0}^{m} b_i u(t−d−i)
       + ( y(τ) + Σ_{i=1}^{n} a_i y(τ−i) − Σ_{i=0}^{m} b_i u(τ−d−i) )
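The rewrite is pure algebra: substituting ỹ(t−i) = y(t−i) − y(τ−i) and ũ = u − u(τ·) into the deviation model and collecting the τ-dependent terms into a constant gives the affine form. A short NumPy sketch with arbitrary coefficients and past signal values (the variable name zeta for the bracketed bias is my own) confirms that the two forms agree:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 3, 2
a = rng.standard_normal(n)           # a_1, ..., a_n from the linearization
b = rng.standard_normal(m + 1)       # b_0, ..., b_m
y_tau  = rng.standard_normal(n + 1)  # y(τ), y(τ-1), ..., y(τ-n); index 0 = y(τ)
u_tau  = rng.standard_normal(m + 1)  # u(τ-d), ..., u(τ-d-m)
y_past = rng.standard_normal(n)      # y(t-1), ..., y(t-n)
u_past = rng.standard_normal(m + 1)  # u(t-d), ..., u(t-d-m)

# Deviation (tilde) form: y(t) = ỹ(t) + y(τ), with
# ỹ(t) = -Σ a_i ỹ(t-i) + Σ b_i ũ(t-d-i)
y_t_tilde = -a @ (y_past - y_tau[1:]) + b @ (u_past - u_tau) + y_tau[0]

# Affine form with the bracketed constant collected into a bias term
zeta = y_tau[0] + a @ y_tau[1:] - b @ u_tau
y_t_affine = -a @ y_past + b @ u_past + zeta
```

In a controller, zeta only needs to be recomputed once per sampling instant, after which the model behaves like an ordinary linear ARX model for design purposes.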