

Both Ω and D are T × T matrices. Starting with an initial guess, μ^{(1)}, the
optimal μ is found by iterating the equation

$$ \mu^{(i+1)} = \left[ \Omega^{-1} + D\bigl(y, \mu^{(i)}\bigr) \right]^{-1} D\bigl(y, \mu^{(i)}\bigr)\, y $$

until convergence. I define D̂(y) as the matrix D upon convergence of μ^{(i)} to μ̂.
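
As an illustration only (not the chapter's code), a minimal Python sketch of this fixed-point iteration follows. It assumes that the random-walk signal introduced in Equation (6.4) below implies Ω⁻¹ = Δ′Δ/q, with Δ the first-difference matrix, and that D(y, μ) is diagonal with entries |ω − I(y_t < μ_t)|; the function name and these constructions are my own.

```python
import numpy as np

def expectile_fixed_point(y, omega, q, tol=1e-8, max_iter=200):
    """Naive fixed-point iteration for the time-varying omega-expectile.

    Sketch under assumptions: Omega^{-1} is taken to be Delta' Delta / q,
    with Delta the (T-1) x T first-difference matrix implied by the
    random-walk signal; D(y, mu) is diagonal with entries
    |omega - I(y_t < mu_t)|.
    """
    T = len(y)
    Delta = np.diff(np.eye(T), axis=0)     # first-difference matrix
    Omega_inv = Delta.T @ Delta / q        # smoothness penalty
    mu = np.full(T, np.mean(y))            # initial guess mu^(1)
    for _ in range(max_iter):
        w = np.abs(omega - (y < mu))       # diagonal of D(y, mu^(i))
        # mu^(i+1) = [Omega^{-1} + D]^{-1} D y, solved without explicit inversion
        mu_new = np.linalg.solve(Omega_inv + np.diag(w), w * y)
        if np.max(np.abs(mu_new - mu)) < tol:
            break
        mu = mu_new
    return mu_new
```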
The repeated inversion of the T × T matrix in the formula can be efficiently
carried out by using the KFS.^5 To this end, it is convenient to set up an auxil-
iary linear state space form at each iteration:


$$ y_t = \delta_t + u_t, \qquad \delta_t = \delta_{t-1} + v_t \qquad (6.4) $$

where

$$ \operatorname{Var}(u_t) = \frac{1}{\bigl|\, \omega - I\bigl(y_t < \mu_t^{(i)}\bigr) \bigr|} $$

and

$$ \operatorname{Var}(v_t) = q $$

The unobservable state δ_t replaces μ_t. The model in Equation (6.4) is just a
convenient device that can be used to carry out the computation efficiently. It
can be shown that the linear KFS applied to Equation (6.4) yields the optimal
μ characterized above.
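
To make the smoothing recursions concrete, here is a minimal Kalman filter and smoother for the local level form (6.4), written as a Python sketch under my own assumptions (the diffuse initial state is approximated by a large prior variance; this is not the authors' implementation):

```python
import numpy as np

def local_level_smoother(y, obs_var, q, a0=0.0, p0=1e7):
    """Kalman filter + RTS smoother for the local level form (6.4):
        y_t     = delta_t + u_t,       Var(u_t) = obs_var[t]
        delta_t = delta_{t-1} + v_t,   Var(v_t) = q
    The diffuse initial state is approximated by a large prior
    variance p0. Returns the smoothed means E[delta_t | y_1..y_T].
    """
    T = len(y)
    a_filt = np.empty(T)
    p_filt = np.empty(T)
    a, p = a0, p0                          # predicted mean/variance for t = 0
    for t in range(T):
        k = p / (p + obs_var[t])           # Kalman gain
        a = a + k * (y[t] - a)             # filtered mean
        p = p * (1.0 - k)                  # filtered variance
        a_filt[t], p_filt[t] = a, p
        p = p + q                          # one-step-ahead prediction
    # Backward (Rauch-Tung-Striebel) smoothing pass
    a_smooth = a_filt.copy()
    for t in range(T - 2, -1, -1):
        g = p_filt[t] / (p_filt[t] + q)    # smoother gain
        a_smooth[t] += g * (a_smooth[t + 1] - a_filt[t])
    return a_smooth
```

One outer iteration of the scheme then sets obs_var elementwise to 1/|ω − I(y_t < μ_t^{(i)})| and takes the smoothed means as μ^{(i+1)}, repeating until μ^{(i)} converges.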
The parameter q can be estimated from the data by cross-validation. The
method is very intuitive: It consists of dropping one observation at a time from
the sample and thus re-estimating the time-varying expectile with a missing
observation T times. The T residuals are then used to compute the objective
function:
$$ CV_\omega(q) = \sum_{t=1}^{T} \rho_\omega\bigl( y_t - \mu_t^{(t)} \bigr) $$

where μ_t^{(t)} is the estimated value at time t when y_t is dropped. CV_ω(q)
depends on q through the estimator μ_t^{(t)}. De Rossi and Harvey (2006) devise a
computationally efficient method to minimize CV_ω(q) with respect to q.
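
For illustration, a brute-force version of the criterion can be sketched as follows; it reuses the hypothetical local_level_smoother above and treats the dropped observation as missing by giving it an effectively infinite measurement variance. This is the O(T)-refits version, not the efficient algorithm of De Rossi and Harvey (2006).

```python
import numpy as np

def rho(u, omega):
    """Asymmetric squared loss: rho_omega(u) = |omega - I(u < 0)| * u**2."""
    return np.abs(omega - (u < 0)) * u**2

def cv_objective(y, omega, q, tol=1e-8, max_iter=200):
    """Brute-force CV_omega(q): refit with each y_t treated as missing.

    Missing data are handled by giving observation t an effectively
    infinite measurement variance, so the filter ignores it while the
    smoother still interpolates delta_t.
    """
    T = len(y)
    total = 0.0
    for t in range(T):
        mu = np.full(T, np.mean(y))           # initial guess
        for _ in range(max_iter):
            obs_var = 1.0 / np.abs(omega - (y < mu))
            obs_var[t] = 1e12                 # drop observation t
            mu_new = local_level_smoother(y, obs_var, q)
            done = np.max(np.abs(mu_new - mu)) < tol
            mu = mu_new
            if done:
                break
        total += rho(y[t] - mu[t], omega)     # deleted residual at time t
    return total

# Illustrative grid search over q (not the efficient minimization):
# q_grid = np.logspace(-4, 1, 25)
# q_hat = q_grid[np.argmin([cv_objective(y, 0.05, q) for q in q_grid])]
```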


6.3 The asset allocation problem


This section illustrates a method for finding, given a set of basic assets, the
portfolio with the lowest estimated risk — where risk is measured as EVaR. It is


5 The connection between spline smoothing and signal extraction in linear Gaussian systems has
been thoroughly investigated in the statistical literature. An early example is Wahba (1978).
