Portfolio optimization with “Threshold Accepting”: a practical guide


Increasing I should lead to better solutions. In particular, with an increasing number of iterations, the mean solution per restart should decrease, while the solutions’ variance should also decrease and eventually go to zero as the number of iterations goes to infinity. Thus, for every restart, the probability of obtaining a good solution increases.
Unfortunately, there is no general rule like “2,000 iterations will give a good solution.” Fortunately, investigating the convergence behavior is straightforward: run a reasonably large number of optimizations (i.e., conduct a large number of restarts) for a given fixed number of iterations. For each of these runs, store the obtained objective function value. Every restart is a draw from F^I, where the superscript indicates the chosen number of iterations. Hence, the empirical CDF of these collected values is an estimator of F^I. To give an example, in Gilli and Schumann (2008), we ran such tests for a portfolio problem where we minimized the Omega function (Keating & Shadwick, 2002). Figure 9.3 illustrates the empirical distribution functions of the solutions for 100,000, 50,000, 35,000, 20,000, and 10,000 iterations.
We can clearly see how the distribution gets steeper (i.e., the variance decreases) and moves to the left (i.e., the average solution decreases). With many iterations, the remaining variability becomes very small, though it is unlikely to disappear altogether. For all practical purposes, however, when working with real (noisy) data, this remaining variability becomes negligible (Gilli & Schumann, 2009b). When implementing TA, such experiments can help to decide how many iterations to use.
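
To make this procedure concrete, here is a minimal Python sketch (an illustration, not the chapter’s actual implementation): it runs a batch of restarts for several iteration budgets and plots the resulting empirical CDFs. The function threshold_accepting is a hypothetical stand-in that merely mimics the qualitative behavior of a real TA run, so that the snippet is self-contained.

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(42)

    def threshold_accepting(n_iterations):
        # Hypothetical stand-in for one TA restart: it only mimics the
        # qualitative behavior (lower mean and lower variance as
        # n_iterations grows); it is not the actual algorithm.
        return 0.45 + rng.gamma(2.0, 0.05 / np.sqrt(n_iterations / 1000.0))

    n_restarts = 500
    for n_iter in (10_000, 20_000, 50_000, 100_000):
        # every restart is one draw from F^I ...
        values = np.sort([threshold_accepting(n_iter) for _ in range(n_restarts)])
        # ... so the empirical CDF of the collected values estimates F^I
        p = np.arange(1, n_restarts + 1) / n_restarts
        plt.step(values, p, where="post", label=f"{n_iter:,} iterations")

    plt.xlabel("objective function value")
    plt.ylabel("p")
    plt.legend()
    plt.show()

With a real implementation, threshold_accepting would run one TA restart on the portfolio problem and return the objective function value of the final solution.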
There is a second issue to investigate. Assume we have a fixed amount of computational resources at our disposal; then it is also important to know how to allocate these resources among restarts, rounds, and steps. Intuitively, we may use fewer iterations, and thus face a “worse” distribution of solutions, but still produce better results by restarting several times than by running one optimization with many iterations. Formal studies of this question generally find that allocating more iterations to fewer restarts works better; see Winker (2001, pp. 129–134). A caveat is in order, though: these studies assume that TA is properly implemented and works correctly for the given problem. In a testing phase, several restarts are definitely advisable.
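
This trade-off is easy to explore numerically. The sketch below (again using the hypothetical threshold_accepting stub from the previous snippet, not the chapter’s code) splits a fixed budget of iterations across different numbers of restarts and records the best solution found under each allocation.

    import numpy as np

    rng = np.random.default_rng(7)

    def threshold_accepting(n_iterations):
        # same hypothetical stub as in the previous sketch
        return 0.45 + rng.gamma(2.0, 0.05 / np.sqrt(n_iterations / 1000.0))

    budget = 100_000  # total iterations affordable across all restarts
    for n_restarts in (1, 2, 5, 10, 20):
        n_iter = budget // n_restarts
        # keep the best solution over all restarts of this allocation
        best = min(threshold_accepting(n_iter) for _ in range(n_restarts))
        print(f"{n_restarts:2d} restarts x {n_iter:6d} iterations -> best {best:.4f}")

Which allocation wins depends on the shape of F^I for the problem at hand, which is why such an experiment is worth running rather than relying on intuition alone.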


[Figure 9.3 Distribution of solutions for increasing number of iterations. Empirical CDFs of the solutions, plotting the probability p against the objective function value Φ, for 100,000 (leftmost), 35,000, 20,000, 15,000, and 10,000 (rightmost) iterations.]
