# Other Optimisers¶

In addition to optimisers that rely on the covariance matrix in the style of Markowitz, recent developments in portfolio optimisation have seen a number of alternative optimisation schemes. PyPortfolioOpt implements some of these, though please note that the implementations may be slightly unstable.

Note

As of v0.4, these other optimisers inherit from BaseOptimizer or BaseScipyOptimizer, so you no longer have to implement pre-processing and post-processing methods on your own. You can thus easily swap out, say, EfficientFrontier for HRPOpt.

## Value-at-Risk¶

The value-at-risk is a measure of tail risk that estimates how much a portfolio will lose in a day with a given probability; equivalently, it is the maximum loss at a confidence level of $$\beta$$. In practice, a more useful measure is the expected shortfall, or conditional value-at-risk (CVaR), which is the mean of all losses so severe that they occur only with probability $$1-\beta$$:

$CVaR_\beta = \frac{1}{1-\beta} \int_0^{1-\beta} VaR_\gamma(X) d\gamma$

To approximate the CVaR for a portfolio, we will follow these steps:

1. Generate the portfolio returns, i.e. the weighted sum of individual asset returns.
2. Fit a Gaussian KDE to these returns, then resample.
3. Compute the value-at-risk as the $$1-\beta$$ quantile of sampled returns.
4. Calculate the mean of all the sample returns that are below the value-at-risk.
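The steps above can be sketched with NumPy and SciPy. This is a minimal illustration on synthetic data, not the library's internal code:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic daily returns for 4 assets, and some example weights
asset_returns = rng.normal(0.0005, 0.01, size=(1000, 4))
weights = np.array([0.4, 0.3, 0.2, 0.1])

# 1. Portfolio returns: weighted sum of individual asset returns
portfolio_returns = asset_returns @ weights

# 2. Fit a Gaussian KDE to these returns, then resample from it
kde = stats.gaussian_kde(portfolio_returns)
samples = kde.resample(10_000, seed=42).flatten()

# 3. Value-at-risk: the (1 - beta) quantile of the sampled returns
beta = 0.95
var = np.quantile(samples, 1 - beta)

# 4. CVaR: the mean of all sampled returns at or below the VaR
cvar = samples[samples <= var].mean()
print(f"VaR: {var:.4f}, CVaR: {cvar:.4f}")
```

Note that because steps 2–4 rely on random sampling, the estimate (and hence any optimisation over it) is noisy, which motivates the choice of optimiser below.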

Though CVaR optimisation can be transformed into a linear programming problem, I have opted to keep things simple by using the NoisyOpt library, which is well suited to optimising noisy functions.

Warning

Caveat emptor: this functionality is still experimental. In my own use of the CVaR optimisation, I have noticed that it is very inconsistent (which is to some extent expected, given its stochastic nature): the optimiser does not always find a minimum, and it fails silently. Additionally, the weight bounds are not treated as hard bounds.

The value_at_risk module allows for optimisation with a (conditional) value-at-risk (CVaR) objective, which requires Monte Carlo simulation.

class pypfopt.value_at_risk.CVAROpt(returns, weight_bounds=(0, 1))

A CVAROpt object (inheriting from BaseScipyOptimizer) provides a method for optimising the CVaR (a.k.a. expected shortfall) of a portfolio.

Instance variables:

• Inputs
• tickers
• returns
• bounds
• Optimisation parameters:

• s: the number of Monte Carlo simulations
• beta: the critical value
• Output: weights

Public methods:

• min_cvar()
• normalize_weights()
__init__(returns, weight_bounds=(0, 1))
Parameters:

• returns (pd.DataFrame) – asset historical returns
• weight_bounds (tuple, optional) – minimum and maximum weight of an asset, defaults to (0, 1). Must be changed to (-1, 1) for portfolios with shorting. For CVaR optimisation, this is not a hard boundary.

Raises: TypeError – if returns is not a dataframe
min_cvar(s=10000, beta=0.95, random_state=None)

Find the portfolio weights that minimise the CVaR, via Monte Carlo sampling from the return distribution.

Parameters:

• s (int, optional) – number of bootstrap draws, defaults to 10000
• beta (float, optional) – “significance level” (i.e. 1 - q), defaults to 0.95
• random_state (int, optional) – seed for random sampling, defaults to None

Returns: asset weights for the CVaR-minimising portfolio
Return type: dict

Caution

Currently, we have not implemented any performance function. If you would like to calculate the actual CVaR of the resulting portfolio, please import the function from objective_functions.

## Hierarchical Risk Parity (HRP)¶

Hierarchical Risk Parity is a novel portfolio optimisation method developed by Marcos Lopez de Prado. Though a detailed explanation can be found in the linked paper, here is a rough overview of how HRP works:

1. From a universe of assets, form a distance matrix based on the correlation of the assets.
2. Using this distance matrix, cluster the assets into a tree via hierarchical clustering.
3. Within each branch of the tree, form the minimum variance portfolio (normally between just two assets).
4. Iterate over each level, optimally combining the mini-portfolios at each node.
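Steps 1 and 2 above can be sketched with SciPy's hierarchical clustering. This is an illustrative fragment on synthetic data (using single linkage, as in Lopez de Prado's paper), not the library's implementation:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, leaves_list

rng = np.random.default_rng(0)
returns = rng.normal(0, 0.01, size=(500, 5))  # synthetic returns, 5 assets

# 1. Distance matrix from the correlation matrix:
#    d_ij = sqrt(0.5 * (1 - rho_ij)), so perfectly correlated assets are distance 0
corr = np.corrcoef(returns, rowvar=False)
dist = np.sqrt(np.clip(0.5 * (1 - corr), 0, 1))

# 2. Cluster the assets into a tree via hierarchical (single-linkage) clustering
#    on the condensed upper-triangular distance vector
condensed = dist[np.triu_indices_from(dist, k=1)]
clusters = linkage(condensed, method="single")

# The leaf order groups similar assets together (quasi-diagonalising the
# covariance matrix), which is the starting point for steps 3 and 4
order = leaves_list(clusters)
print(order)
```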

The advantages of this approach are that it does not require inverting the covariance matrix, as traditional quadratic optimisers do, and that it tends to produce diverse portfolios that perform well out of sample.

The hierarchical_risk_parity module implements the HRP portfolio from Marcos Lopez de Prado. It has the same interface as EfficientFrontier. Call the hrp_portfolio() method to generate a portfolio.

The code has been reproduced with modification from Lopez de Prado (2016).

class pypfopt.hierarchical_risk_parity.HRPOpt(returns)

A HRPOpt object (inheriting from BaseOptimizer) constructs a hierarchical risk parity portfolio.

Instance variables:

• Inputs
• returns
• Output: weights

Public methods:

• hrp_portfolio()
__init__(returns)
Parameters: returns (pd.DataFrame) – asset historical returns
Raises: TypeError – if returns is not a dataframe
hrp_portfolio()

Construct a hierarchical risk parity portfolio

Returns: weights for the HRP portfolio
Return type: dict

## The Critical Line Algorithm¶

This is a robust alternative to the quadratic solver used to find mean-variance optimal portfolios, and is especially advantageous when we apply linear inequality constraints. Unlike generic quadratic optimisers, the CLA is specially designed for portfolio optimisation: it is guaranteed to converge after a certain number of iterations, and can efficiently derive the entire efficient frontier.

Tip

In general, unless you have specific requirements (e.g. you would like to efficiently compute the entire efficient frontier for plotting), I would go with the standard EfficientFrontier optimiser.

I am most grateful to Marcos López de Prado and David Bailey for providing the implementation; permission for its distribution has been received by email. It has been modified such that it has the same API as the other optimisers, though as of v0.5.0 we only support max_sharpe() and min_volatility().

The cla module houses the CLA class, which generates optimal portfolios using the Critical Line Algorithm as implemented by Marcos Lopez de Prado and David Bailey.

class pypfopt.cla.CLA(expected_returns, cov_matrix, weight_bounds=(0, 1))
__init__(expected_returns, cov_matrix, weight_bounds=(0, 1))
Parameters:

• expected_returns (pd.Series, list, np.ndarray) – expected returns for each asset. Set to None if optimising for volatility only.
• cov_matrix (pd.DataFrame or np.array) – covariance of returns for each asset
• weight_bounds (tuple (float, float) or (list/ndarray, list/ndarray), optional) – minimum and maximum weight of an asset, defaults to (0, 1). Must be changed to (-1, 1) for portfolios with shorting.

Raises:

• TypeError – if expected_returns is not a series, list or array
• TypeError – if cov_matrix is not a dataframe or array
efficient_frontier(points=100)

Efficiently compute the entire efficient frontier

Parameters: points (int, optional) – rough number of points to evaluate, defaults to 100
Raises: ValueError – if weights have not been computed

Returns: return list, std list, weight list
Return type: (float list, float list, np.ndarray list)
max_sharpe()

Get the max Sharpe ratio portfolio

min_volatility()

Get the minimum variance solution

portfolio_performance(verbose=False, risk_free_rate=0.02)

After optimising, calculate (and optionally print) the performance of the optimal portfolio. Currently calculates expected return, volatility, and the Sharpe ratio.

Parameters:

• verbose (bool, optional) – whether performance should be printed, defaults to False
• risk_free_rate (float, optional) – risk-free rate of borrowing/lending, defaults to 0.02

Raises: ValueError – if weights have not been calculated yet

Returns: expected return, volatility, Sharpe ratio
Return type: (float, float, float)
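These three quantities follow directly from the weights, the expected returns, and the covariance matrix, and can be computed by hand. A worked example with made-up numbers (independent of the CLA class):

```python
import numpy as np

weights = np.array([0.5, 0.3, 0.2])
expected_returns = np.array([0.10, 0.08, 0.12])   # annualised
cov_matrix = np.array([[0.04, 0.01, 0.00],
                       [0.01, 0.09, 0.02],
                       [0.00, 0.02, 0.16]])
risk_free_rate = 0.02

ret = weights @ expected_returns                   # expected portfolio return
vol = np.sqrt(weights @ cov_matrix @ weights)      # portfolio volatility
sharpe = (ret - risk_free_rate) / vol              # Sharpe ratio
print(ret, vol, sharpe)
```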

Please note that this is quite different from implementing Custom objectives, because in that case we are still using the same quadratic optimiser. HRP and CVaR optimisation, by contrast, use fundamentally different optimisation methods. In general, these are much more difficult to implement than custom objective functions.

To implement a custom optimiser that is compatible with the rest of PyPortfolioOpt, just extend BaseOptimizer (or BaseScipyOptimizer if you want to use scipy.optimize), both of which can be found in base_optimizer.py. This gives you access to utility methods like clean_weights(), as well as making sure that any output is compatible with portfolio_performance() and post-processing methods.

The base_optimizer module houses the parent classes BaseOptimizer and BaseScipyOptimizer, from which all optimisers will inherit. The latter is for optimisers that use the scipy solver. Additionally, we define a general utility function portfolio_performance to evaluate return and risk for a given set of portfolio weights.

class pypfopt.base_optimizer.BaseOptimizer(n_assets, tickers=None)
__init__(n_assets, tickers=None)
Parameters:

• n_assets (int) – number of assets
• tickers (list) – names of the assets
clean_weights(cutoff=0.0001, rounding=5)

Helper method to clean the raw weights, setting any weights whose absolute values are below the cutoff to zero, and rounding the rest.

Parameters:

• cutoff (float, optional) – the lower bound below which weights are set to zero, defaults to 1e-4
• rounding (int, optional) – number of decimal places to round the weights, defaults to 5. Set to None if rounding is not desired.

Returns: asset weights
Return type: dict
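The cleaning step is easy to reason about. An equivalent standalone sketch (illustrative, not the library's exact implementation):

```python
def clean_weights(weights, cutoff=1e-4, rounding=5):
    """Zero out weights below the cutoff (in absolute value) and round the rest."""
    cleaned = {}
    for ticker, w in weights.items():
        if abs(w) < cutoff:
            cleaned[ticker] = 0.0
        else:
            cleaned[ticker] = round(w, rounding) if rounding is not None else w
    return cleaned

raw = {"AAPL": 0.3333333, "GOOG": 0.6666612, "MSFT": 5.5e-6}
print(clean_weights(raw))  # {'AAPL': 0.33333, 'GOOG': 0.66666, 'MSFT': 0.0}
```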
set_weights(weights)

Utility function to set weights.

Parameters: weights (dict) – {ticker: weight} dictionary
class pypfopt.base_optimizer.BaseScipyOptimizer(n_assets, tickers=None, weight_bounds=(0, 1))
__init__(n_assets, tickers=None, weight_bounds=(0, 1))
Parameters: weight_bounds (tuple, optional) – minimum and maximum weight of an asset, defaults to (0, 1). Must be changed to (-1, 1) for portfolios with shorting.
_make_valid_bounds(test_bounds)

Private method: process input bounds into a form acceptable by scipy.optimize, and check the validity of said bounds.

Parameters: test_bounds (tuple) – minimum and maximum weight of an asset

Raises:

• ValueError – if test_bounds is not a tuple of length two
• ValueError – if the lower bound is too high

Returns: a tuple of bounds, e.g. ((0, 1), (0, 1), (0, 1), …)
Return type: tuple of tuples
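The bound-expansion logic can be illustrated with a standalone sketch. This is a simplified approximation of what such a method must do (expand one pair into per-asset scipy-style bounds and sanity-check it), not the library's exact code; the n_assets argument is added here for illustration:

```python
def make_valid_bounds(test_bounds, n_assets):
    """Expand a single (lower, upper) pair into per-asset scipy-style bounds."""
    if not isinstance(test_bounds, tuple) or len(test_bounds) != 2:
        raise ValueError("test_bounds must be a tuple of (lower, upper)")
    lower, _ = test_bounds
    # If every asset's lower bound exceeds 1/n_assets, weights cannot sum to one
    if lower is not None and lower * n_assets > 1:
        raise ValueError("lower bound is too high: weights cannot sum to 1")
    return (test_bounds,) * n_assets

print(make_valid_bounds((0, 1), 3))  # ((0, 1), (0, 1), (0, 1))
```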