12.3 Optimal Control Theory

which minimizes the functional, called the performance index,

J = \int_0^T f_0(\mathbf{x}, \mathbf{u}, t)\, dt \qquad (12.21)

where

\mathbf{x} = \begin{Bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{Bmatrix}
is called the state vector, t the time parameter, T the terminal time, and f_0 is a function
of x, u, and t. The state variables x_i and the control variables u_i are related as


\frac{dx_i}{dt} = f_i(x_1, x_2, \ldots, x_n;\; u_1, u_2, \ldots, u_m;\; t), \qquad i = 1, 2, \ldots, n

or

\dot{\mathbf{x}} = \mathbf{f}(\mathbf{x}, \mathbf{u}, t) \qquad (12.22)
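As a simple illustration (the particular functions below are assumed, not taken from the text), consider a double-integrator plant with a quadratic measure of control effort:

\text{Find } u(t) \text{ which minimizes } J = \int_0^T u^2\, dt \quad \text{subject to} \quad \dot{x}_1 = x_2, \;\; \dot{x}_2 = u

so that, in the notation of Eqs. (12.21) and (12.22), f_0 = u^2, f_1 = x_2, f_2 = u, with n = 2 state variables and m = 1 control variable.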

In many problems, the system is linear and Eq. (12.22) can be stated as


\dot{\mathbf{x}} = [A]\mathbf{x} + [B]\mathbf{u} \qquad (12.23)

where [A] is an n × n matrix and [B] is an n × m matrix. Further, while finding the
control vector u, the state vector x is to be transferred from a known initial vector x_0
at t = 0 to a terminal vector x_T at t = T, where some (or all or none) of the state
variables are specified.
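As a minimal numerical sketch of the system description in Eq. (12.23) (the matrices [A] and [B], the control u(t), the initial state, the terminal time, and the step count below are illustrative assumptions, and a simple Euler scheme stands in for any ODE integrator), the state trajectory produced by a given control can be computed as:

import numpy as np

# Minimal sketch of Eq. (12.23); all numerical values below are assumed
# for illustration only.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])      # [A], n x n (a double integrator, assumed)
B = np.array([[0.0],
              [1.0]])           # [B], n x m with m = 1 (assumed)

T = 1.0                         # terminal time
steps = 1000
dt = T / steps
x = np.array([1.0, 0.0])        # known initial state vector x0 at t = 0

def u(t):
    # hypothetical open-loop control; any control of compatible dimension works
    return np.array([-2.0])

for k in range(steps):
    t = k * dt
    x = x + dt * (A @ x + B @ u(t))   # Euler step of x_dot = [A]x + [B]u

print("state at t = T:", x)

The sketch only simulates the plant for one assumed control; the optimal control problem is to choose the u(t) that transfers x to the prescribed terminal vector x_T while minimizing the performance index J of Eq. (12.21).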


12.3.1 Necessary Conditions for Optimal Control


To derive the necessary conditions for the optimal control, we consider the following
simple problem:


Find u which minimizes J = \int_0^T f_0(x, u, t)\, dt \qquad (12.24)

subject to


\dot{x} = f(x, u, t) \qquad (12.25)

with the boundary condition x(0) = k_1. To solve this optimal control problem, we
introduce a Lagrange multiplier λ and define an augmented functional J^* as


J^* = \int_0^T \{ f_0(x, u, t) + \lambda [ f(x, u, t) - \dot{x} ] \}\, dt \qquad (12.26)
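Because the constraint of Eq. (12.25) is satisfied along every admissible trajectory, the bracketed term in Eq. (12.26) vanishes there, so that

f(x, u, t) - \dot{x} = 0 \quad\Rightarrow\quad J^* = J \quad \text{on admissible trajectories,}

with the multiplier understood, in general, to be a function of time, λ = λ(t).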

Since the integrand


F = f_0 + \lambda (f - \dot{x}) \qquad (12.27)