Optimizing Optimization: The Next Generation of Optimization Applications and Theory (Quantitative Finance)




9.3 Threshold accepting


9.3.1 The algorithm


TA is a local search method. A local search starts with a random feasible solution x^c (i.e., a random portfolio), which we call the “current solution,” as it represents the best we have so far. Then, again randomly, a new solution x^n close to x^c is chosen. What “close to x^c” means will be discussed shortly. This new solution is called a neighbor solution. If x^n is better than x^c, the new solution is accepted (i.e., the neighbor becomes the current solution); if not, it is rejected, and another neighbor is selected. For a given objective function, a local search is completely described by how it chooses a neighbor solution and by its stopping criterion (i.e., the rule that states when the search is finished). Algorithm 1 gives this procedure in pseudocode. The stopping criterion here is a preset number of iterations n_Steps; the final x^c becomes our overall solution x^sol. The symbol X stands for the set of all feasible portfolios.


Algorithm 1 Local Search.

1: initialize n_Steps
2: randomly generate current solution x^c ∈ X
3: for i = 1 : n_Steps do
4:   generate x^n ∈ N(x^c) and compute Δ = Φ(x^n) − Φ(x^c)
5:   if Δ < 0 then x^c = x^n
6: end for
7: x^sol = x^c
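To make Algorithm 1 concrete, the following is a minimal Python sketch of the same loop. The objective function phi, the neighborhood function neighbour, and the helper random_portfolio that draws a random feasible portfolio are hypothetical placeholders that would have to be supplied for a specific problem; they are not taken from the text.

def local_search(phi, neighbour, random_portfolio, n_steps):
    """Plain local search: accept a neighbour only if it improves the objective."""
    x_c = random_portfolio()         # step 2: random feasible current solution
    for _ in range(n_steps):         # step 3: preset number of iterations
        x_n = neighbour(x_c)         # step 4: draw a neighbour of the current solution
        delta = phi(x_n) - phi(x_c)  # change in the objective function
        if delta < 0:                # step 5: accept only improvements
            x_c = x_n
    return x_c                       # step 7: the final current solution is x_sol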
In a convex problem, a local search will, given an appropriate neighborhood function and enough iterations, succeed in finding the global minimum, even though it is certainly not the most efficient method. But we are compensated for this inefficiency: all the method requires is that the objective function can be evaluated for a given portfolio x; there is no need for the objective function to be continuous or differentiable. Unfortunately, for problems with many local minima, a local search will stop at the first local optimum it finds.
TA now builds on this approach, but adds a simple strategy to escape such local minima: it will not only accept a new solution that improves on the current one, but it will also allow “uphill moves,” as long as the deterioration of Φ does not exceed a fixed threshold (hence the method’s name). Over time, this threshold decreases until eventually TA turns into a classical local search. The whole procedure is summarized in Algorithm 2.
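To illustrate the acceptance rule that distinguishes TA from the plain local search above, here is a minimal Python sketch. It assumes a precomputed decreasing sequence of thresholds (how these are set is not described in this excerpt) and reuses the hypothetical placeholders phi, neighbour, and random_portfolio from the previous sketch; it is an illustration of the idea, not a transcription of Algorithm 2.

def threshold_accepting(phi, neighbour, random_portfolio, thresholds, n_steps_per_threshold):
    """Threshold accepting: also accept uphill moves below the current threshold."""
    x_c = random_portfolio()                  # random feasible current solution
    for tau in thresholds:                    # thresholds decrease over time
        for _ in range(n_steps_per_threshold):
            x_n = neighbour(x_c)              # draw a neighbour of the current solution
            delta = phi(x_n) - phi(x_c)       # change in the objective function
            if delta < tau:                   # accept improvements and small deteriorations
                x_c = x_n
    return x_c                                # with the final threshold at zero, the last
                                              # rounds reduce to a classical local search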
