
properly. Such constraints need not be limited to position sizes: where applicable, we can also include constraints on aggregate Greeks like Delta or Gamma, or a minimum required performance under artificially added crash scenarios.
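As an illustration, one simple way to incorporate such side constraints into the objective function is through penalty terms. The sketch below is hypothetical: the names base_objective and option_deltas, the Delta limit, and the penalty weight are assumptions for illustration, not quantities defined in this chapter.

import numpy as np

def penalized_objective(weights, base_objective, option_deltas,
                        max_abs_delta=0.10, penalty_weight=100.0):
    # Objective value of the candidate portfolio (user-supplied callable).
    obj = base_objective(weights)
    # Aggregate Delta of the portfolio (illustrative linear aggregation).
    portfolio_delta = float(np.dot(weights, option_deltas))
    # Penalize only the part of the aggregate Delta that exceeds the allowed band.
    violation = max(abs(portfolio_delta) - max_abs_delta, 0.0)
    return obj + penalty_weight * violation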


9.5.3 Degenerate objective functions


Numerically, ratios have some unfavorable properties when compared with linear combinations: if the numerator becomes zero, the ratio becomes zero and the search becomes unguided; if the denominator is zero, we get an error (Inf). If the sign of the numerator or denominator changes over the course of the search, the ratio often becomes uninterpretable; an example is the Sharpe ratio for a negative mean excess return.
So we generally need safeguards. To avoid sign problems, we can use centered quantities: lower partial moments, for example, are computed from $r_d - r$ for returns $r$ lower than $r_d$, so the numbers will always be nonnegative. An alternative is to use operations like $\max(\cdot, 0)$ or $\min(\cdot, 0)$ to ensure the sign of some quantity.
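As a concrete illustration, a risk-reward ratio built from a lower partial moment, with simple safeguards against sign changes and a vanishing denominator, might look as follows. This is a minimal sketch: the function name, the order of the partial moment, and the sentinel value are assumptions for illustration only.

import numpy as np

def risk_reward_ratio(returns, r_d=0.0):
    # Risk-reward ratio to be minimized: lower partial moment over mean excess return.
    returns = np.asarray(returns, dtype=float)
    # Shortfalls r_d - r for returns below r_d; max(., 0) keeps them nonnegative.
    shortfall = np.maximum(r_d - returns, 0.0)
    risk = np.sqrt(np.mean(shortfall ** 2))
    reward = np.mean(returns) - r_d
    if reward <= 0.0:
        # Sign change in the denominator: the ratio is not interpretable,
        # so return a large value and let the search move away.
        return 1e10
    if risk == 0.0:
        # Zero risk usually points to a peculiar data sample rather than
        # a truly excellent portfolio; the ratio is zero, but the sample
        # deserves a closer look.
        return 0.0
    return risk / reward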
There is, however, also a valuable aspect in these instabilities, for they are
not only of a numerical nature. In fact, problems when computing a ratio may
indicate problems with the model or the data, which would go unnoticed with
a linear combination. For instance, if a risk-reward ratio turns zero, this means we have found a portfolio with no risk at all, which, unfortunately, is more often an indication of a peculiar data sample than of a truly excellent portfolio.


9.5.4 The neighborhood and the thresholds


It is helpful to have some insight into the local structure (the “surface”) of the objective function in the search space. For low-dimensional problems, we can plot the objective function as in Figure 9.1; with more dimensions, we can take random walks through the search space (as in Algorithm 4). That is, we start with a random portfolio and move through the search space according to our neighborhood function, accepting any new portfolio. The changes in the objective function accompanying every step should be visually inspected, for instance with histograms or CDF plots. A large number of zero changes indicates a flat surface: in such regions of the search space, the algorithm will get no guidance from the objective function. Another sign of potential trouble is a clustering of the changes: a large number of very small changes combined with a large number of very large changes may indicate bad scaling of the problem.
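A random walk of this kind, which accepts every neighbor and records the change in the objective at each step, can be sketched in a few lines. The neighbour and objective arguments stand for the user's neighborhood and objective functions; the chapter's Algorithm 4 may differ in its details.

import numpy as np

def random_walk_deltas(x0, neighbour, objective, n_steps=5000):
    # Walk through the search space, accepting every new portfolio,
    # and record the change in the objective function at each step.
    x = x0
    f = objective(x)
    deltas = np.empty(n_steps)
    for i in range(n_steps):
        x_new = neighbour(x)        # draw a neighbor portfolio
        f_new = objective(x_new)
        deltas[i] = f_new - f       # change accompanying this step
        x, f = x_new, f_new         # accept unconditionally
    return deltas

A histogram or empirical CDF of the absolute values of these deltas then gives a quick picture of the surface.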
The observed changes should be of roughly the same magnitude as the thresholds, which is why such random walks are often used to inform the threshold setting, as described in Algorithms 3 and 4. If the thresholds are too large compared with the average changes, our optimization algorithm degenerates into a random search, since any new portfolio will be accepted. If the thresholds are too small, the algorithm becomes too restrictive and gets stuck in local minima.
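Given such a walk, one common data-driven way to set the thresholds is to take decreasing quantiles of the observed absolute changes, ending at zero. The sketch below is in the spirit of Algorithms 3 and 4; the number of rounds and the starting quantile are assumptions, not values prescribed in the chapter.

import numpy as np

def thresholds_from_walk(deltas, n_rounds=10, start_quantile=0.8):
    # Decreasing threshold sequence taken from the distribution of
    # absolute objective changes observed along the random walk.
    abs_deltas = np.abs(np.asarray(deltas, dtype=float))
    quantiles = np.linspace(start_quantile, 0.0, n_rounds)
    return np.quantile(abs_deltas, quantiles)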
During the actual optimization run, it is good practice to store the accepted changes in the objective function (i.e., the accepted Δ-values in Algorithm 2).
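A minimal Threshold Accepting loop that records these accepted Δ-values might look as follows. This is a sketch only; the chapter's Algorithm 2 may organize the rounds and the stopping rule differently.

import numpy as np

def threshold_accepting(x0, neighbour, objective, thresholds, steps_per_round=1000):
    # Threshold Accepting: accept a neighbor whenever the change in the
    # objective is below the current threshold; store the accepted deltas.
    x, f = x0, objective(x0)
    accepted_deltas = []
    for tau in thresholds:
        for _ in range(steps_per_round):
            x_new = neighbour(x)
            delta = objective(x_new) - f
            if delta < tau:
                x, f = x_new, f + delta
                accepted_deltas.append(delta)
    return x, f, np.asarray(accepted_deltas)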
