Optimizing Optimization: The Next Generation of Optimization Applications and Theory (Quantitative Finance)


Optimal solutions for optimization in practice 89


which is the maximization of a linear expression subject to upper bounds on
the Euclidean norms of general linear terms. These constraints are equivalent
to bounds on positive definite quadratic forms, since

$$(x - b)^T C (x - b) \le M \iff \left\| Rx - Rb \right\| \le M^{1/2},$$

where $C = R^T R$.
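As a quick numerical sanity check (a sketch of my own, not from the text, using NumPy and a randomly generated positive definite $C$), the equivalence rests on the identity $(x - b)^T C (x - b) = \|Rx - Rb\|^2$, with $R$ obtained from a Cholesky factorisation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random positive definite C and factor it as C = R^T R (Cholesky).
A = rng.standard_normal((4, 4))
C = A @ A.T + 4 * np.eye(4)        # positive definite by construction
R = np.linalg.cholesky(C).T        # upper-triangular R with R^T R = C

x = rng.standard_normal(4)
b = rng.standard_normal(4)

quad = (x - b) @ C @ (x - b)                  # quadratic form (x-b)^T C (x-b)
norm_sq = np.linalg.norm(R @ x - R @ b) ** 2  # squared norm ||Rx - Rb||^2

# The two agree to machine precision, so a bound M on one
# is exactly a bound M on the other.
assert np.isclose(quad, norm_sq)
```

Since the two quantities coincide for every $x$, bounding the quadratic form by $M$ is the same as bounding the norm by $M^{1/2}$.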
We can minimize a positive definite quadratic form by noting the following:


$$\lambda a^T x + (x - b)^T C (x - b) = \left\| Rx - Rb + \tfrac{1}{2}\lambda R^{-T} a \right\|^2 - \left( \tfrac{1}{2}\lambda \right)^2 \left\| R^{-T} a \right\|^2 + \lambda a^T b$$
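This completing-the-square identity can be verified numerically (a sketch under my own assumptions: random data, NumPy, and `lam` standing in for $\lambda$):

```python
import numpy as np

rng = np.random.default_rng(1)
n, lam = 5, 0.7                      # lam plays the role of lambda

A = rng.standard_normal((n, n))
C = A @ A.T + n * np.eye(n)          # positive definite
R = np.linalg.cholesky(C).T          # C = R^T R
Rinv_T = np.linalg.inv(R).T          # R^{-T}

a = rng.standard_normal(n)
b = rng.standard_normal(n)
x = rng.standard_normal(n)

# Left-hand side: lambda * a^T x + (x-b)^T C (x-b)
lhs = lam * a @ x + (x - b) @ C @ (x - b)

# Right-hand side: completed square plus x-independent terms
rhs = (np.linalg.norm(R @ x - R @ b + 0.5 * lam * Rinv_T @ a) ** 2
       - (0.5 * lam) ** 2 * np.linalg.norm(Rinv_T @ a) ** 2
       + lam * a @ b)

assert np.isclose(lhs, rhs)
```

Expanding the squared norm reproduces the quadratic form plus the cross term $\lambda a^T (x - b)$, while the last two terms cancel the leftover constants.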

Then, because the last two terms are independent of x ,


$$\text{minimize} \quad \lambda a^T x + (x - b)^T C (x - b)$$

becomes

$$\text{maximize} \quad -z \quad \text{subject to} \quad \left\| Rx - Rb + \tfrac{1}{2}\lambda R^{-T} a \right\| \le z,$$
which is an $(n + 2)$-dimensional cone constraint, where $z$ is an extra dummy scalar
variable. The two quadratic constraints in our robust optimization become two
$(n + 1)$-dimensional cone constraints, and each linear constraint can be written as

$$\left| a^T x - \tfrac{1}{2}(u + l) \right| \le \tfrac{1}{2}(u - l),$$
which is a two-dimensional cone constraint where a is either a unit vector or a
row of $A^T$, and $u$ and $l$ are upper and lower bounds. We are thus able to
program our robust optimization as the dual of the standard SOCP.
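The equivalence between the box constraint $l \le a^T x \le u$ and its two-dimensional cone form can be checked directly (an illustrative sketch, with arbitrary test data of my own):

```python
import numpy as np

def in_box(a, x, l, u):
    """Direct bound check: l <= a^T x <= u."""
    return l <= a @ x <= u

def in_cone(a, x, l, u):
    """Equivalent cone form: |a^T x - (u + l)/2| <= (u - l)/2."""
    return abs(a @ x - 0.5 * (u + l)) <= 0.5 * (u - l)

rng = np.random.default_rng(2)
a = rng.standard_normal(3)
l, u = -1.0, 2.0

# The two formulations accept and reject exactly the same points.
for _ in range(1000):
    x = 3 * rng.standard_normal(3)
    assert in_box(a, x, l, u) == in_cone(a, x, l, u)
```

Centering $a^T x$ at the midpoint $(u + l)/2$ turns the pair of one-sided bounds into a single absolute-value (two-dimensional cone) constraint of half-width $(u - l)/2$.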


Appendix B: BITA GLO


The following relationship holds.


$$\text{Expected return} - \text{Target} = \text{Gain} - \text{Loss}$$

Therefore, if we define expected utility by $V$, where $V = \text{Gain} - (1 + \lambda)\,\text{Loss}$,
then it follows that:


$$V = \text{Expected return} - \text{Target} - \lambda\,\text{Loss}$$
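The algebra behind the last step can be confirmed with a small worked example (the numerical values below are illustrative assumptions, not from the text):

```python
# Illustrative numbers (assumed for the example): an outcome split into
# Gain and Loss relative to a target, so Expected return - Target = Gain - Loss.
expected_return = 0.08
target = 0.05
gain = 0.07
loss = gain - (expected_return - target)   # chosen to satisfy the identity

lam = 2.0                                  # loss-aversion parameter lambda

# Definition of expected utility: V = Gain - (1 + lambda) * Loss
V_def = gain - (1 + lam) * loss

# Equivalent form derived above: V = Expected return - Target - lambda * Loss
V_derived = expected_return - target - lam * loss

assert abs(V_def - V_derived) < 1e-12
```

The two expressions agree because $\text{Gain} - (1 + \lambda)\,\text{Loss} = (\text{Gain} - \text{Loss}) - \lambda\,\text{Loss}$, and the bracketed term equals $\text{Expected return} - \text{Target}$.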