194 CHAPTER 5. NEURAL NETWORKS FOR CONTROL
- Consider the following training set consisting of bipolar input-output pairs
  {(x^q, y^q), q = 1, 2, 3, 4}:

      x = (x1, x2)      y
      (1, 1)            1
      (1, −1)          −1
      (−1, 1)          −1
      (−1, −1)         −1

  Find the weight vector that minimizes the error E.
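  Assuming E is the usual sum-of-squared-errors of a linear unit with a bias
  weight (an Adaline-style unit, which the text does not state explicitly), the
  minimizing weight vector can be checked numerically with a least-squares
  solve:

  ```python
  import numpy as np

  # Bipolar training set from the exercise; first column is the bias input.
  X = np.array([[1,  1,  1],
                [1,  1, -1],
                [1, -1,  1],
                [1, -1, -1]], dtype=float)
  y = np.array([1, -1, -1, -1], dtype=float)

  # Least-squares solution w = argmin ||Xw - y||^2, assuming E is the
  # sum-of-squares error of a linear unit y_hat = w0 + w1*x1 + w2*x2.
  w, *_ = np.linalg.lstsq(X, y, rcond=None)
  print(w)  # bias w0 = -0.5, w1 = 0.5, w2 = 0.5
  ```

  Because the four bipolar input vectors are mutually orthogonal (including the
  bias column), the closed-form solution reduces to w = (1/4) Xᵀy.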
- Let f1(x) = 1/(1 + e^(−x)) and f2(x) = 2/(1 + e^(−x)) − 1.
  (a) Compute the derivatives of f1 and f2.
  (b) Sketch the graph of f2.
  (c) Compute the derivative of the hyperbolic tangent function

          f3(x) = (e^x − e^(−x)) / (e^x + e^(−x)).

  (d) Verify that

          f2(x) = 2 f1(x) − 1 = (1 − e^(−x)) / (1 + e^(−x)).

  (e) Verify that f3(x) = f2(2x).
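  The derivative formulas and the identities in parts (c)-(e) can be
  sanity-checked numerically; a minimal sketch using central finite
  differences (the helper name `num_deriv` and the sample points are
  illustrative choices):

  ```python
  import math

  f1 = lambda x: 1.0 / (1.0 + math.exp(-x))
  f2 = lambda x: 2.0 / (1.0 + math.exp(-x)) - 1.0
  f3 = math.tanh  # (e^x - e^-x) / (e^x + e^-x)

  def num_deriv(f, x, h=1e-6):
      """Central finite-difference approximation of f'(x)."""
      return (f(x + h) - f(x - h)) / (2 * h)

  for x in (-2.0, -0.5, 0.0, 1.0, 3.0):
      # Closed-form derivatives: f1' = f1(1 - f1), f3' = 1 - f3^2.
      assert abs(num_deriv(f1, x) - f1(x) * (1 - f1(x))) < 1e-6
      assert abs(num_deriv(f3, x) - (1 - f3(x) ** 2)) < 1e-6
      # Part (d): f2 = 2 f1 - 1 = (1 - e^-x) / (1 + e^-x).
      assert abs(f2(x) - (2 * f1(x) - 1)) < 1e-12
      assert abs(f2(x) - (1 - math.exp(-x)) / (1 + math.exp(-x))) < 1e-12
      # Part (e): f3(x) = f2(2x).
      assert abs(f3(x) - f2(2 * x)) < 1e-12
  ```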
- Let f1(x) = 1/(1 + e^(−x)). For a, b ∈ R with a < b, let α = b − a and
  β = −a. Show that the range of the function

      g(x) = α f1(x) − β

  is the open interval (a, b).
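  A quick numerical check of the claim: since f1 maps R onto (0, 1),
  g(x) = (b − a) f1(x) + a should sweep out exactly (a, b). The endpoints
  −2 and 5 below are arbitrary test values:

  ```python
  import math

  def g(x, a, b):
      """g(x) = alpha * f1(x) - beta with alpha = b - a, beta = -a."""
      f1 = 1.0 / (1.0 + math.exp(-x))
      return (b - a) * f1 - (-a)

  a, b = -2.0, 5.0
  values = [g(x, a, b) for x in (-30.0, -1.0, 0.0, 1.0, 30.0)]
  assert all(a < v < b for v in values)           # strictly inside (a, b)
  assert abs(g(0.0, a, b) - (a + b) / 2) < 1e-12  # f1(0) = 1/2 maps to the midpoint
  ```

  Large positive and negative inputs land arbitrarily close to b and a
  without reaching them, which is why the interval is open.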
- Given the following function

      y1 = 4 sin(πx1) + 2 cos(πx2)

  (a) Obtain a set of 20 input-output training data pairs for random variation
      of (x1, x2) in the interval [−1, 1]. Train a single-hidden-layer neural
      network with bipolar sigmoidal functions to the lowest value of tolerance
      required. You may wish to choose a value of 1.0E−06 as a start.
  (b) Obtain a test set of data from the given function with a different seed
      used in the random number generator. Test the function approximation
      capability of the neural network. Can the approximation capability be
      improved by training the neural network with more data? Does
      over-training cause degradation in performance?
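  One possible starting point for part (a), sketched with plain NumPy. The
  hidden-layer width, learning rate, iteration count, and the use of tanh as
  the bipolar sigmoid are all assumptions for illustration, not prescriptions:

  ```python
  import numpy as np

  rng = np.random.default_rng(0)  # training seed; use a different one for the test set

  # 20 random training pairs for y1 = 4 sin(pi*x1) + 2 cos(pi*x2) on [-1, 1]^2.
  X = rng.uniform(-1.0, 1.0, size=(20, 2))
  y = 4 * np.sin(np.pi * X[:, 0]) + 2 * np.cos(np.pi * X[:, 1])

  # Single hidden layer with a bipolar sigmoid (tanh) and a linear output.
  H = 10
  W1 = rng.normal(0, 0.5, size=(2, H)); b1 = np.zeros(H)
  W2 = rng.normal(0, 0.5, size=H);      b2 = 0.0

  def forward(X):
      hidden = np.tanh(X @ W1 + b1)
      return hidden, hidden @ W2 + b2

  lr = 0.01
  loss0 = np.mean((forward(X)[1] - y) ** 2)
  for _ in range(5000):
      hidden, out = forward(X)
      err = out - y                              # error signal (constant factor absorbed by lr)
      gW2 = hidden.T @ err / len(X)
      gb2 = err.mean()
      dh = np.outer(err, W2) * (1 - hidden**2)   # backprop through tanh
      gW1 = X.T @ dh / len(X)
      gb1 = dh.mean(axis=0)
      W2 -= lr * gW2; b2 -= lr * gb2
      W1 -= lr * gW1; b1 -= lr * gb1
  loss = np.mean((forward(X)[1] - y) ** 2)
  print(f"training MSE: {loss0:.4f} -> {loss:.6f}")
  ```

  For part (b), regenerate X with a different seed, evaluate
  `forward(X_test)`, and compare the test MSE against the training MSE as the
  training-set size and iteration count are varied.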