The Mathematics of Financial Modeling and Investment Management





$$
J = \int_{t_0}^{t_1} f[t, x(t), u(t)]\,dt + S[t_1, x(t_1)]
$$

A functional is a mapping from a set of functions into the set of real
numbers; it associates a number to each function. The definite integral is
an example of a functional.
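As an illustration (the function names and the quadrature choice below are my own, not from the text), a functional can be evaluated numerically: the routine takes a function as input and returns a single real number, here a definite integral over [0, 1] computed by the trapezoidal rule.

```python
import math

def J(f, n=1000):
    """Definite-integral functional J[f] = ∫_0^1 f(t) dt,
    approximated by the trapezoidal rule with n subintervals.
    J maps an entire function f to a single real number."""
    h = 1.0 / n
    interior = sum(f(k * h) for k in range(1, n))
    return h * (0.5 * f(0.0) + interior + 0.5 * f(1.0))

# Different input functions produce different real numbers:
area_linear = J(lambda t: t)      # ∫_0^1 t dt, exactly 0.5 for the trapezoid rule
area_sine = J(math.sin)           # ∫_0^1 sin(t) dt
```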
To solve the above optimal control problem, a useful strategy is to find
a set of differential equations that must be satisfied by the control. Two
major approaches for solving this problem are available: Bellman’s
Dynamic Programming^7 and Pontryagin’s Maximum Principle.^8 The
former approach is based on the fact that the value of the state variable at
time t captures all the information necessary for decision-making from time
t onward: the paths of the control vector and the state variable up to time t
make no difference as long as the state variable at time t is the same.
Bellman showed how to derive from this observation a partial differential
equation that uniquely determines the control. Pontryagin’s Maximum
Principle introduces additional auxiliary variables and derives, via the
calculus of variations, differential equations that might be simpler to solve
than those of Bellman’s dynamic programming.
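The principle behind dynamic programming can be sketched in discrete time with a toy problem of my own devising (not from the text): minimize a sum of quadratic running costs under linear dynamics on a small state grid. Note that the value function at each step depends only on the current state, never on the path that reached it.

```python
def solve_dp(states, controls, T):
    """Backward Bellman recursion for a toy finite-horizon problem
    (hypothetical example): dynamics x' = x + u, running cost
    x**2 + u**2, terminal cost x**2. Returns the value function
    V[x] at time 0 and a per-stage optimal policy."""
    states = list(states)
    V = {x: x * x for x in states}          # terminal cost S(x) = x^2
    policy = []
    for _ in range(T):                      # step backward in time
        V_new, pol = {}, {}
        for x in states:
            best = None
            for u in controls:
                x_next = x + u
                if x_next not in V:
                    continue                # keep the state inside the grid
                cost = x * x + u * u + V[x_next]
                if best is None or cost < best:
                    best, pol[x] = cost, u
            V_new[x] = best
        V, policy = V_new, [pol] + policy
    return V, policy

V, policy = solve_dp(range(-5, 6), (-1, 0, 1), T=4)
```

Starting from x = 0 the optimal control is u = 0 at every stage, so the value V[0] is zero; the recursion computes this using only the current state at each step, which is exactly Bellman's observation.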

STOCHASTIC PROGRAMMING


The model formulations discussed thus far assume that the data for the
given problem are known precisely. However, in financial economics, data
are stochastic and cannot be known with certainty. Stochastic programming
can be used to make optimal decisions under uncertainty. The fundamental
idea behind stochastic programming is the concept of stages and recourse.
Recourse is the ability to take corrective action at a future time, that is, at
a decision stage, after a random event has taken place.
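A minimal two-stage sketch, using a newsvendor-style toy problem with hypothetical numbers (not from the text): a first-stage order quantity x is fixed before demand is known; after demand d is revealed in each scenario, the recourse action buys any shortfall at a higher spot price.

```python
# Stage 1: order x units at unit cost COST before demand is observed.
# Stage 2 (recourse): buy the shortfall max(d - x, 0) at SPOT per unit.
COST, SPOT = 1.0, 3.0
scenarios = [(0.3, 50), (0.5, 100), (0.2, 150)]   # (probability, demand)

def expected_cost(x):
    """First-stage cost plus the probability-weighted recourse cost."""
    return COST * x + sum(p * SPOT * max(d - x, 0) for p, d in scenarios)

# Brute-force the first-stage decision over an integer grid.
best_x = min(range(201), key=expected_cost)
```

The corrective purchase is the recourse: it is chosen per scenario, after the random demand has taken place, while x must be committed before.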
To formulate problems of dynamic decision-making under uncer-
tainty as a stochastic program, we must first characterize the uncertainty
in the model. The most common method is to formulate scenarios and to
assign to each scenario a probability. A scenario is a complete path of
data. To illustrate the problem of stochastic programming, let’s consider

(^7) R. Bellman, Dynamic Programming (Princeton, NJ: Princeton University Press,
1957).
(^8) For a discussion of Pontryagin’s Maximum Principle see, for instance: E.B. Lee
and L. Marcus, Foundations of Optimal Control Theory (New York: John Wiley &
Sons, 1967).
