Optimizing Optimization: The Next Generation of Optimization Applications and Theory (Quantitative Finance)

He showed that investor preferences (expected utility) can be approximated
locally by the mean and variance of risky investments alone. This approximation
works well as long as returns are not “too nonnormal” (certainly violated with
nonlinear payoffs) or “too volatile” (or, equivalently, the time horizon is not
“too long”), since otherwise the local approximation becomes increasingly badly
behaved for larger, no longer infinitesimal returns, as pointed out in Levy and
Markowitz (1979). The computational convenience offered by Markowitz inspired
countless applications.
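
To see the quality of this approximation at work, consider the following minimal
sketch (the risk aversion and return parameters are purely hypothetical). It
compares Monte Carlo expected utility for a CRRA investor with the second-order
mean-variance approximation, first for roughly normal returns and then for an
option-like, truncated payoff:

    import numpy as np

    rng = np.random.default_rng(0)
    GAMMA = 3.0  # hypothetical relative risk aversion

    def u(w):
        """CRRA (power) utility of terminal wealth."""
        return w ** (1.0 - GAMMA) / (1.0 - GAMMA)

    def mv_approx(r):
        """Second-order Taylor approximation around mean wealth:
        E[u(W)] ~= u(mu) + 0.5 * u''(mu) * Var(W)."""
        mu, var = 1.0 + r.mean(), r.var()
        u2 = -GAMMA * mu ** (-GAMMA - 1.0)  # u''(w) = -gamma * w**(-gamma - 1)
        return u(mu) + 0.5 * u2 * var

    # Roughly normal returns: the two numbers are close.
    r_normal = rng.normal(0.05, 0.15, 1_000_000)
    print(u(1.0 + r_normal).mean(), mv_approx(r_normal))

    # Option-like (truncated, nonlinear) payoff: the gap widens.
    r_option = np.maximum(r_normal, -0.05) - 0.03
    print(u(1.0 + r_option).mean(), mv_approx(r_option))

For the truncated payoff, the skewness that the mean-variance pair cannot see
drives a visible wedge between the exact and the approximated expected utility,
which is exactly the failure mode described above.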
However, academics and practitioners alike soon started to
ignore the fact that mean-variance investing was designed as an approximation
to maximizing expected utility under the above conditions. Instead, they
expressed their disapproval of variance as a measure of risk and invented
(rather than derived from first principles) new statistical risk measures.¹ I call
them statistical risk measures as they have been selected to capture particular
features of a return distribution, not because they more closely resemble
expected utility maximization. Apart from a few dispersion measures (e.g.,
mean-absolute deviation rather than variance), a large variety of downside risk
measures have been engineered. These measures in particular had no relation to
plausible utility functions. Reconciliation with an expected utility optimization
framework has therefore been ex post.
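
Purely to fix ideas, the sketch below (synthetic returns, hypothetical
threshold) computes one dispersion measure, mean-absolute deviation, next to
two common downside measures, semivariance and a first-order lower partial
moment:

    import numpy as np

    rng = np.random.default_rng(0)
    r = rng.normal(0.05, 0.15, 100_000)  # simulated annual returns (hypothetical)
    target = 0.0                         # hypothetical downside threshold

    mad = np.mean(np.abs(r - r.mean()))                # mean-absolute deviation
    semivar = np.mean(np.minimum(r - target, 0.0)**2)  # semivariance below target
    lpm1 = np.mean(np.maximum(target - r, 0.0))        # first lower partial moment

    print(f"MAD: {mad:.4f}  semivariance: {semivar:.4f}  LPM1: {lpm1:.4f}")

Each number summarizes a particular feature of the return distribution; none of
them is derived from an investor's utility function.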
Academia invented utility functions that
would support a given measure to provide a theoretical fig leaf. However, on
close inspection these new utility functions are at odds with investors’
preferences. An example of this, or more provocatively a low point in this
development, was the “invention” of value at risk (VaR). It is defined as the
maximum loss that will not be exceeded with a specified confidence (usually
95%) at the end of a given time period (usually a year).² Investors are assumed
not to worry about the extent of losses exceeding VaR, which in fact makes them
risk-seeking in the tail of a return distribution. We are very unlikely to meet
this type of investor in reality.
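
In practice, historical VaR is just a quantile of the loss distribution. A
minimal sketch (simulated, fat-tailed P&L; all figures hypothetical) makes the
blind spot explicit: everything beyond the quantile is invisible to the measure.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical one-year P&L scenarios, fat-tailed to make the point visible.
    pnl = rng.standard_t(df=4, size=100_000) * 2.0

    # 95% VaR: the loss that is not exceeded with 95% confidence.
    losses = -pnl
    var_95 = np.quantile(losses, 0.95)

    # Losses beyond VaR do not enter the measure, however severe they are.
    tail_losses = losses[losses > var_95]
    print(f"95% VaR: {var_95:.2f}")
    print(f"average loss beyond VaR (ignored by the measure): {tail_losses.mean():.2f}")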
In any case, there was little evidence that mean
and VaR would approximate utility better than mean and variance for realistic
problems. Given the rise in statistical rather than economic risk measures,
Artzner, Delbaen, Eber, and Heath (1997) developed a set of statistical axioms
designed to address the shortcomings of statistical risk measures and, in
particular, value at risk. For a risk measure to be coherent (pass all axioms),
it must be monotonic (larger losses mean larger risk), positively homogeneous
(increasing positions by a multiple must increase risk by the same multiple³),
translation invariant (adding cash to a position reduces its risk by the same
amount), and subadditive (the risk of a combined position must not exceed the
sum of the risks of its parts, so that diversification cannot increase risk).
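
VaR famously fails the last axiom. A minimal sketch (two hypothetical,
independent defaultable bonds; all numbers made up) shows 95% VaR assigning the
pooled position more risk than the sum of its parts:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 1_000_000

    def var_95(losses):
        """Historical 95% VaR: the loss not exceeded in 95% of scenarios."""
        return np.quantile(losses, 0.95)

    # Two independent bonds: 3% default probability (lose 100), else earn 5.
    loss_a = np.where(rng.random(n) < 0.03, 100.0, -5.0)
    loss_b = np.where(rng.random(n) < 0.03, 100.0, -5.0)

    print("VaR(A):    ", var_95(loss_a))           # -5: default hides past the 95% quantile
    print("VaR(B):    ", var_95(loss_b))           # -5
    print("VaR(A + B):", var_95(loss_a + loss_b))  # 95: pooling drags the default into view

Because each bond defaults with only 3% probability, its loss sits beyond the
95% quantile and VaR reports the stand-alone positions as riskless; pooling the
two raises the probability of at least one default above 5%, and the combined
VaR jumps, violating subadditivity.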


1 Observing that returns are not really normally distributed, many authors addressed the shortcomings of variance without relating their measures to expected utility maximization. Portfolio selection then wrongly became a field for statistics, engineering, and operations research. While expected utility maximization got pushed into the background, it nevertheless was always there as the only way to correctly solve the problem, showing von Neumann’s original genius.
2 We use the 95% confidence limit throughout this chapter.
3 The 2008 credit crisis revealed that this axiom is likely to be the weakest, given that liquidity might dry up for very large positions.
