




\nabla f = \left( \frac{\partial f}{\partial x_1}, \ldots, \frac{\partial f}{\partial x_n} \right) = (0, \ldots, 0)
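As a quick illustration of this unconstrained first-order condition, here is a minimal sympy sketch (a hypothetical example, not from the text) that solves \nabla f = 0 symbolically for f(x, y) = x^2 + xy + y^2, whose only critical point is the origin:

import sympy as sp

# Hypothetical example: find the critical points of
# f(x, y) = x**2 + x*y + y**2 by solving grad f = 0.
x, y = sp.symbols('x y', real=True)
f = x**2 + x*y + y**2

grad = [sp.diff(f, v) for v in (x, y)]    # (df/dx, df/dy)
print(sp.solve(grad, (x, y), dict=True))  # [{x: 0, y: 0}]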

Let’s now discuss how to find maxima and minima when the optimization problem has equality constraints. Suppose that the n variables (x_1, \ldots, x_n) are not independent, but satisfy m < n constraint equations

g_1(x_1, \ldots, x_n) = 0
\vdots
g_m(x_1, \ldots, x_n) = 0

These equations define, in general, an (n-m)-dimensional surface.
For instance, in the case of two variables, a constraint g_1(x, y) = 0
defines a curve in the plane. In the case of three variables, one
constraint g_1(x, y, z) = 0 defines a two-dimensional surface, while two
constraints g_1(x, y, z) = 0 and g_2(x, y, z) = 0 define a curve in
three-dimensional space, and so on.
Our objective is to find the maxima or minima of the function f over
the set of points that also satisfy the constraints. It can be demonstrated
that, under this restriction, the gradient ∇f of f need not vanish at the
maxima or minima, but need only be orthogonal to the (n-m)-dimensional
surface described by the constraint equations. That is, the following
relationships must hold

\nabla f = \boldsymbol{\lambda}^T \nabla g, \quad \text{for some } \boldsymbol{\lambda} = (\lambda_1, \ldots, \lambda_m)

or, in the usual notation

\frac{\partial f}{\partial x_i} = \sum_{j=1}^{m} \lambda_j \frac{\partial g_j}{\partial x_i}, \quad i = 1, \ldots, n

The coefficients (\lambda_1, \ldots, \lambda_m) are called Lagrange multipliers.
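To see the multiplier conditions in action, the following sympy sketch (a hypothetical example, not from the text) solves \nabla f = \lambda \nabla g together with the constraint for f(x, y) = xy subject to g(x, y) = x + y - 1 = 0:

import sympy as sp

# Hypothetical example: maximize f(x, y) = x*y subject to
# g(x, y) = x + y - 1 = 0 by solving grad f = lambda * grad g
# jointly with the constraint itself.
x, y, lam = sp.symbols('x y lambda', real=True)
f = x*y
g = x + y - 1

eqs = [sp.diff(f, x) - lam*sp.diff(g, x),  # y - lambda = 0
       sp.diff(f, y) - lam*sp.diff(g, y),  # x - lambda = 0
       g]                                  # the constraint
print(sp.solve(eqs, (x, y, lam), dict=True))
# [{x: 1/2, y: 1/2, lambda: 1/2}] -- the constrained maximum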
If we define the function

F(x_1, \ldots, x_n, \lambda_1, \ldots, \lambda_m) = f(x_1, \ldots, x_n) - \sum_{j=1}^{m} \lambda_j g_j
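Stationary points of this function F in all n + m variables reproduce the conditions above, since \partial F / \partial \lambda_j = 0 recovers the j-th constraint g_j = 0. A minimal sympy sketch, reusing the same hypothetical example as before:

import sympy as sp

# Same hypothetical example via the Lagrangian
# F(x, y, lambda) = x*y - lambda*(x + y - 1).
# dF/dlambda = 0 recovers the constraint x + y - 1 = 0.
x, y, lam = sp.symbols('x y lambda', real=True)
F = x*y - lam*(x + y - 1)

eqs = [sp.diff(F, v) for v in (x, y, lam)]
print(sp.solve(eqs, (x, y, lam), dict=True))
# [{x: 1/2, y: 1/2, lambda: 1/2}] -- matches the direct solution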