The Mathematics of Financial Modeling and Investment Management

both f and the constraints are linear. Quadratic programming is the specialization of mathematical programming to instances where f is a quadratic function. The Markowitz mean-variance approach leads to a quadratic programming problem.
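To make the connection concrete, the following is a minimal sketch of a Markowitz-style quadratic program with equality constraints only. The expected returns, covariance matrix, and target return are hypothetical illustrative data; with only equality constraints, the quadratic program reduces to a linear system via the Karush-Kuhn-Tucker (KKT) conditions.

```python
import numpy as np

# Hypothetical data: expected returns and covariances for three assets
mu = np.array([0.08, 0.10, 0.12])
Sigma = np.array([[0.09, 0.01, 0.00],
                  [0.01, 0.16, 0.02],
                  [0.00, 0.02, 0.25]])
r_target = 0.10

# Equality-constrained QP: minimize w' Sigma w
# subject to mu'w = r_target and 1'w = 1.
# The KKT conditions yield a linear system in (w, lambda):
#   [2*Sigma  A'] [w     ]   [0]
#   [A        0 ] [lambda] = [b]
n = len(mu)
A = np.vstack([mu, np.ones(n)])          # constraint matrix (2 x n)
KKT = np.block([[2 * Sigma, A.T],
                [A, np.zeros((2, 2))]])
rhs = np.concatenate([np.zeros(n), [r_target, 1.0]])
sol = np.linalg.solve(KKT, rhs)
w = sol[:n]                              # minimum-variance weights
print(w, w @ mu, w.sum())
```

With inequality constraints (for example, no short sales, w ≥ 0), the closed-form linear system no longer applies and a general quadratic programming solver is needed.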
A different, and more difficult, problem is the optimization of a
dynamic process. In this case, the objective function depends on the entire
realization of a process, which is often not deterministic but stochastic.
Decisions might be taken at intermediate steps on the basis of information
revealed up to that point. This is the concept of recourse, that is, revision of
past decisions. This area of optimization is called stochastic programming.
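The idea of recourse can be illustrated with a toy two-stage problem of the newsvendor type; all the numbers (prices, demand scenarios, probabilities) are hypothetical. The first-stage decision (an order quantity) is made before uncertainty is resolved; the second-stage recourse (sales and salvage) adapts to the realized scenario. The first-stage optimum is found here by exhaustive search, a luxury available only in tiny examples.

```python
import numpy as np

# Toy two-stage stochastic program with recourse (newsvendor style).
# Stage 1: choose order quantity q before demand is known.
# Stage 2 (recourse): after demand d is revealed, sell min(q, d)
# and salvage any leftover units.
cost, price, salvage = 1.0, 2.5, 0.5
demands = np.array([60, 100, 140])       # hypothetical demand scenarios
probs = np.array([0.3, 0.5, 0.2])        # scenario probabilities

def expected_profit(q):
    sales = np.minimum(q, demands)       # recourse decision per scenario
    leftover = q - sales
    profits = price * sales + salvage * leftover - cost * q
    return probs @ profits               # expectation over scenarios

# Exhaustive search over candidate first-stage decisions
candidates = np.arange(0, 201)
best_q = max(candidates, key=expected_profit)
print(best_q, expected_profit(best_q))
```

Real stochastic programs replace the brute-force search with large-scale linear programming over all scenarios, but the structure, a here-and-now decision followed by scenario-dependent recourse, is the same.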
From an application perspective, mathematical programming is an optimization tool that allows the rationalization of many business or technological decisions. The computational tractability of the resulting analytical models is a key issue in mathematical programming. The simplex algorithm, developed in 1947 by George Dantzig, was one of the first tractable mathematical programming algorithms to be developed for linear programming. Its subsequent successful implementation contributed to the acceptance of optimization as a scientific approach to decision-making and initiated the field known as operations research.
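A small linear program can be stated and solved in a few lines with an off-the-shelf solver. The sketch below uses SciPy's `linprog` (which employs modern simplex and interior-point variants) on a classic toy production-planning instance; the coefficients are illustrative, not from the text.

```python
from scipy.optimize import linprog

# Linear program in the form linprog expects:
#   minimize c'x  subject to  A_ub x <= b_ub,  x >= 0.
# We maximize 3x + 5y subject to x <= 4, 2y <= 12, 3x + 2y <= 18,
# so we minimize the negated objective.
c = [-3, -5]
A_ub = [[1, 0],
        [0, 2],
        [3, 2]]
b_ub = [4, 12, 18]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)   # optimal plan and the maximized objective value
```

The optimum lies at a vertex of the feasible polytope, here x = 2, y = 6 with objective value 36, which is exactly the geometric fact the simplex algorithm exploits by walking from vertex to vertex.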
Optimization is a highly technical subject, which we will not fully develop in this chapter. Instead, our objective is to give the reader a general understanding of the technology. We begin with an explanation of the maxima and minima of a multivariate function subject to constraints. We then discuss the basic tools for static optimization: linear programming and quadratic programming. After introducing the idea of optimizing a process and defining the concepts of the calculus of variations and control theory, we briefly cover the techniques of stochastic programming.^4

MAXIMA AND MINIMA


Consider a multivariate function f(x_1, ..., x_n) of n real-valued variables. Suppose that f is twice differentiable. Define the gradient of f, grad f, also written ∇f, as the vector whose components are the first-order partial derivatives of f:

\mathrm{grad}\left[f(x_1, \ldots, x_n)\right] = \nabla f = \left(\frac{\partial f}{\partial x_1}, \ldots, \frac{\partial f}{\partial x_n}\right)
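The gradient can be approximated numerically one component at a time by perturbing each variable in turn. The sketch below checks a central-difference approximation against the analytic gradient of an arbitrary example function f(x, y) = x² + 3xy, chosen purely for illustration.

```python
import numpy as np

# Example function: f(x, y) = x^2 + 3xy
def f(x):
    return x[0] ** 2 + 3 * x[0] * x[1]

def grad_numeric(f, x, h=1e-6):
    """Central-difference approximation of the gradient of f at x."""
    g = np.zeros_like(x, dtype=float)
    for i in range(len(x)):
        e = np.zeros_like(x, dtype=float)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

x0 = np.array([1.0, 2.0])
# Analytic gradient: (df/dx, df/dy) = (2x + 3y, 3x)
analytic = np.array([2 * x0[0] + 3 * x0[1], 3 * x0[0]])
print(grad_numeric(f, x0), analytic)
```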

^4 For a good introduction to stochastic programming, see, among others, J.R. Birge and F. Louveaux, Introduction to Stochastic Programming (Heidelberg: Springer, 1997) and Peter Kall and Stein W. Wallace, Stochastic Programming (Chichester, West Sussex: Wiley, 1995).
