First, we define a convenience function that returns the relevant portfolio statistics for an input weights vector/array:


In [47]: def statistics(weights):
             ''' Returns portfolio statistics.

             Parameters
             ==========
             weights : array-like
                 weights for different securities in portfolio

             Returns
             =======
             pret : float
                 expected portfolio return
             pvol : float
                 expected portfolio volatility
             pret / pvol : float
                 Sharpe ratio for rf=0
             '''
             weights = np.array(weights)
             pret = np.sum(rets.mean() * weights) * 252
             pvol = np.sqrt(np.dot(weights.T, np.dot(rets.cov() * 252, weights)))
             return np.array([pret, pvol, pret / pvol])
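The function relies on the returns DataFrame rets and the number of securities noa, which are defined earlier in the chapter. As a minimal self-contained sketch — with simulated daily log returns standing in for the chapter's real stock data — it can be exercised like this:

import numpy as np
import pandas as pd

noa = 5  # number of assets, as in the chapter's example
np.random.seed(1000)
# simulated daily log returns as a stand-in for the chapter's real data
rets = pd.DataFrame(np.random.normal(0.0005, 0.01, size=(1000, noa)),
                    columns=['asset_%d' % i for i in range(noa)])

# equal-weight portfolio as a quick sanity check
pret, pvol, sharpe = statistics(noa * [1. / noa])
print('return %.4f | volatility %.4f | Sharpe %.4f' % (pret, pvol, sharpe))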

The derivation of the optimal portfolios is a constrained optimization problem for which we use the function minimize from the scipy.optimize sublibrary (cf. Chapter 9):

In  [ 48 ]: import scipy.optimize as sco

The minimization function minimize is quite general and allows for (in)equality constraints and bounds for the parameters. Let us start with the maximization of the Sharpe ratio. Formally, we minimize the negative value of the Sharpe ratio:

In [49]: def min_func_sharpe(weights):
             return -statistics(weights)[2]
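Written out, with a risk-free rate of zero this is the standard equivalence (notation chosen to match the statistics function, not taken from the book's own equations):

\max_{w}\ \frac{\mu_p(w) - r_f}{\sigma_p(w)}
\quad\Longleftrightarrow\quad
\min_{w}\ -\frac{\mu_p(w)}{\sigma_p(w)}
\qquad (r_f = 0)

where \mu_p(w) corresponds to pret and \sigma_p(w) to pvol.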

The constraint is that all parameters (weights) add up to 1. This can be formulated as follows using the conventions of the minimize function (cf. the documentation for this function).

In [50]: cons = ({'type': 'eq', 'fun': lambda x: np.sum(x) - 1})
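As a reminder of the convention (standard scipy.optimize behavior, not specific to this example): 'eq' constraints must evaluate to zero at the solution, while 'ineq' constraints must be non-negative. A hypothetical additional inequality constraint, say capping the first weight at 50%, would simply be appended as another dict:

# illustrative only -- not part of the original example
cons = ({'type': 'eq', 'fun': lambda x: np.sum(x) - 1},    # weights sum to 1
        {'type': 'ineq', 'fun': lambda x: 0.5 - x[0]})     # x[0] <= 0.5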

We also bound the parameter values (weights) to be within 0 and 1. These values are provided to the minimization function as a tuple of tuples in this case:

In [51]: bnds = tuple((0, 1) for x in range(noa))

The only input that is missing for a call of the optimization function is a starting parameter list (initial guesses for the weights). We simply use an equal distribution:

In [52]: noa * [1. / noa,]
Out[52]: [0.2, 0.2, 0.2, 0.2, 0.2]

Calling the function returns not only optimal parameter values, but much more. We store the results in an object we call opts:

In [53]: %%time
         opts = sco.minimize(min_func_sharpe, noa * [1. / noa,], method='SLSQP',
                             bounds=bnds, constraints=cons)
Out[53]: CPU times: user 52 ms, sys: 0 ns, total: 52 ms
         Wall time: 50.3 ms

Here are the results:


In [54]: opts
Out[54]:  status: 0
         success: True
            njev: 6
            nfev: 42
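The object opts is a scipy OptimizeResult, which supports both attribute and dictionary-style access; for example, the optimal weights are available as opts['x'] (equivalently opts.x) and can be passed back to statistics:

# retrieve the optimal weights and the corresponding portfolio statistics
print(opts['x'].round(3))              # optimal weight vector
print(statistics(opts['x']).round(3))  # [return, volatility, Sharpe ratio]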