# Black-Litterman Allocation¶

The Black-Litterman (BL) model [1] takes a Bayesian approach to asset allocation. Specifically, it combines a prior estimate of returns (for example, the market-implied returns) with views on certain assets, to produce a posterior estimate of expected returns. The advantages of this are:

• You can provide views on only a subset of assets and BL will meaningfully propagate them, taking into account their covariance with the other assets.
• You can provide confidence in your views.
• Using Black-Litterman posterior returns results in much more stable portfolios than using mean-historical return.

Essentially, Black-Litterman treats the vector of expected returns itself as a quantity to be estimated. The Black-Litterman formula is given below:

$E(R) = [(\tau \Sigma)^{-1} + P^T \Omega^{-1} P]^{-1}[(\tau \Sigma)^{-1} \Pi + P^T \Omega^{-1} Q]$
• $$E(R)$$ is a Nx1 vector of expected returns, where N is the number of assets.
• $$Q$$ is a Kx1 vector of views.
• $$P$$ is the KxN picking matrix which maps views to the universe of assets. Essentially, it tells the model which view corresponds to which asset(s).
• $$\Omega$$ is the KxK uncertainty matrix of views.
• $$\Pi$$ is the Nx1 vector of prior expected returns.
• $$\Sigma$$ is the NxN covariance matrix of asset returns (as always).
• $$\tau$$ is a scalar tuning constant.

Though the formula appears unwieldy, it simply represents a weighted average between the prior estimate of returns and the views, where the weighting is determined by the confidence in the views and the parameter $$\tau$$.

Similarly, we can calculate a posterior estimate of the covariance matrix:

$\hat{\Sigma} = \Sigma + [(\tau \Sigma)^{-1} + P^T \Omega^{-1} P]^{-1}$
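To make the algebra concrete, here is a minimal NumPy sketch of both posterior formulas. All inputs (the two-asset covariance matrix, prior, view, and uncertainty) are made-up illustrative numbers, not values from the text:

```python
import numpy as np

def bl_posterior(pi, Sigma, P, Q, Omega, tau=0.05):
    """Posterior returns and covariance from the two Black-Litterman formulas."""
    ts_inv = np.linalg.inv(tau * Sigma)           # (tau * Sigma)^-1
    om_inv = np.linalg.inv(Omega)                 # Omega^-1
    M = np.linalg.inv(ts_inv + P.T @ om_inv @ P)  # shared bracketed term
    post_rets = M @ (ts_inv @ pi + P.T @ om_inv @ Q)
    post_cov = Sigma + M
    return post_rets, post_cov

# Two assets; one absolute view: asset 0 will return 10%
Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])
pi = np.array([[0.05], [0.07]])    # prior expected returns
P = np.array([[1.0, 0.0]])         # picking matrix (1 view x 2 assets)
Q = np.array([[0.10]])             # the view itself
Omega = np.array([[0.001]])        # view uncertainty

post_rets, post_cov = bl_posterior(pi, Sigma, P, Q, Omega)
```

Note how asset 0's posterior return lands between the prior (5%) and the view (10%), and how asset 1 is nudged upward too because of its positive covariance with asset 0, even though no view mentions it.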

Though the algorithm is relatively simple, BL proved to be a challenge from a software engineering perspective because it’s not quite clear how best to fit it into PyPortfolioOpt’s API. The full discussion can be found in a GitHub issue thread, but I ultimately decided that though BL is not technically an optimizer, it didn’t make sense to split its methods between expected_returns and risk_models. I have thus made it an independent module and, owing to the comparatively extensive theory, have given it a dedicated documentation page. I’d like to thank Felipe Schneider for his multiple contributions to the Black-Litterman implementation. For a full example of its usage, including the acquisition of market cap data for free, please refer to the cookbook recipe.

Tip

Thomas Kirschenmann has built a neat interactive Black-Litterman tool on top of PyPortfolioOpt, which allows you to visualise BL outputs and compare optimization objectives.

## Priors¶

You can think of the prior as the “default” estimate, in the absence of any information. Black and Litterman (1991) [2] provide the insight that a natural choice for this prior is the market’s estimate of the return, which is embedded into the market capitalisation of the asset.

Every asset in the market portfolio contributes a certain amount of risk to the portfolio. Standard theory suggests that investors must be compensated for the risk that they take, so we can attribute to each asset an expected compensation (i.e. prior estimate of returns). This is quantified by the market-implied risk premium, which is the market’s excess return divided by its variance:

$\delta = \frac{R-R_f}{\sigma^2}$

To calculate the market-implied returns, we then use the following formula:

$\Pi = \delta \Sigma w_{mkt}$

Here, $$w_{mkt}$$ denotes the market-cap weights. This formula calculates the total amount of risk contributed by each asset and multiplies it by the market price of risk, resulting in the market-implied returns vector $$\Pi$$. We can use PyPortfolioOpt to calculate this as follows:

from pypfopt import black_litterman, risk_models

"""
cov_matrix is a NxN sample covariance matrix
mcaps is a dict of market caps
market_prices is a series of S&P500 prices
"""
delta = black_litterman.market_implied_risk_aversion(market_prices)
prior = black_litterman.market_implied_prior_returns(mcaps, delta, cov_matrix)
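Under the hood, market_implied_prior_returns is essentially the matrix product $$\Pi = \delta \Sigma w_{mkt}$$. A minimal NumPy sketch with made-up market caps, covariances, and risk aversion (illustrative numbers only):

```python
import numpy as np

# Illustrative inputs: three assets with made-up caps and covariances
mcaps = np.array([2.0e12, 1.5e12, 0.5e12])  # market capitalisations
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])
delta = 2.5                                 # market-implied risk aversion

w_mkt = mcaps / mcaps.sum()                 # market-cap weights
pi = delta * Sigma @ w_mkt                  # Pi = delta * Sigma * w_mkt
```

(The PyPortfolioOpt helper additionally adds the risk-free rate; the sketch above shows only the excess-return part of the calculation.)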


There is nothing stopping you from using any prior you see fit (but it must have the same dimensionality as the universe). If you think that the mean historical returns are a good prior, you could go with that. But a significant body of research shows that mean historical returns are a completely uninformative prior.

Note

You don’t technically have to provide a prior estimate to the Black-Litterman model. This is particularly useful if your views (and confidences) were generated by some proprietary model, in which case BL is essentially a clever way of mixing your views.

## Views¶

In the Black-Litterman model, users can provide either absolute or relative views. Absolute views are statements like: “AAPL will return 10%” or “XOM will drop 40%”. Relative views, on the other hand, are statements like “GOOG will outperform FB by 3%”.

These views must be specified in the vector $$Q$$ and mapped to the asset universe via the picking matrix $$P$$. A brief example of this is shown below, though a comprehensive guide is given by Idzorek. Let’s say that our universe is defined by the ordered list: SBUX, GOOG, FB, AAPL, BAC, JPM, T, GE, MSFT, XOM. We want to represent four views on these 10 assets, two absolute and two relative:

1. SBUX will drop 20% (absolute)
2. MSFT will rise by 5% (absolute)
3. GOOG outperforms FB by 10%
4. BAC and JPM will outperform T and GE by 15%

The corresponding views vector is formed by taking the numbers above and putting them into a column:

import numpy as np

Q = np.array([-0.20, 0.05, 0.10, 0.15]).reshape(-1, 1)


The picking matrix is more interesting. Remember that its role is to link the views (which mention 8 assets) to the universe of 10 assets. Arguably, this is the most important part of the model because it is what allows us to propagate our expectations (and confidences in expectations) into the model:

P = np.array(
    [
        [1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 1, 0],
        [0, 1, -1, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0.5, 0.5, -0.5, -0.5, 0, 0],
    ]
)


A brief explanation of the above:

• Each view has a corresponding row in the picking matrix (the order matters)
• Absolute views have a single 1 in the column corresponding to the ticker’s order in the universe.
• Relative views have a positive number in the nominally outperforming asset columns and a negative number in the nominally underperforming asset columns. The numbers in each row should sum up to 0.
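For larger universes, the same picking matrix can be built programmatically from the view descriptions rather than typed out by hand. A sketch, where picking_matrix is a hypothetical helper of my own, not part of PyPortfolioOpt:

```python
import numpy as np

universe = ["SBUX", "GOOG", "FB", "AAPL", "BAC", "JPM", "T", "GE", "MSFT", "XOM"]

def picking_matrix(universe, views):
    """Build P from one {ticker: weight} dict per view (hypothetical helper)."""
    P = np.zeros((len(views), len(universe)))
    idx = {t: i for i, t in enumerate(universe)}
    for k, view in enumerate(views):
        for ticker, weight in view.items():
            P[k, idx[ticker]] = weight
    return P

views = [
    {"SBUX": 1},                                      # absolute view on SBUX
    {"MSFT": 1},                                      # absolute view on MSFT
    {"GOOG": 1, "FB": -1},                            # GOOG outperforms FB
    {"BAC": 0.5, "JPM": 0.5, "T": -0.5, "GE": -0.5},  # basket outperformance
]
P = picking_matrix(universe, views)
```

This reproduces the matrix above, and makes the row-sum rule easy to check: each relative-view row sums to zero.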

PyPortfolioOpt provides a helper method for inputting absolute views as either a dict or pd.Series – if you have relative views, you must build your picking matrix manually:

from pypfopt.black_litterman import BlackLittermanModel

viewdict = {"AAPL": 0.20, "BBY": -0.30, "BAC": 0, "SBUX": -0.2, "T": 0.15}
bl = BlackLittermanModel(cov_matrix, absolute_views=viewdict)


## Confidence matrix and tau¶

The confidence matrix is a diagonal covariance matrix containing the variances of each view. One heuristic for calculating $$\Omega$$ is to say that it is proportional to the variance of the priors. This is reasonable - quantities that move around a lot are harder to forecast! Hence PyPortfolioOpt does not require you to input a confidence matrix, and defaults to:

$\Omega = \mathrm{diag}(\tau P \Sigma P^T)$

Alternatively, we provide an implementation of Idzorek’s method [1]. This allows you to specify your view uncertainties as percentage confidences. To use this, choose omega="idzorek" and pass a list of confidences (from 0 to 1) into the view_confidences parameter.
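For intuition, Walters’ closed-form statement of Idzorek’s idea maps a percentage confidence $$c$$ for a view with picking row $$p$$ to the variance $$\omega = \tau \frac{1-c}{c} \, p \Sigma p^T$$, so full confidence ($$c = 1$$) gives zero uncertainty and $$c \to 0$$ makes the view ignored. A sketch of that formula in NumPy (my own illustrative implementation, not PyPortfolioOpt’s code):

```python
import numpy as np

def idzorek_omega(view_confidences, P, Sigma, tau=0.05):
    """Diagonal view-uncertainty matrix from percentage confidences (closed form)."""
    omegas = []
    for conf, p in zip(view_confidences, P):
        p = p.reshape(1, -1)
        alpha = (1 - conf) / conf  # 0 at full confidence, -> inf near zero confidence
        omegas.append(float(tau * alpha * p @ Sigma @ p.T))
    return np.diag(omegas)

Sigma = np.array([[0.04, 0.01], [0.01, 0.09]])
P = np.array([[1.0, 0.0], [0.0, 1.0]])  # one absolute view per asset
Omega = idzorek_omega([0.5, 0.8], P, Sigma)
```

In PyPortfolioOpt itself you would simply pass omega="idzorek" and view_confidences to BlackLittermanModel rather than building $$\Omega$$ yourself.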

You are of course welcome to provide your own estimate. This is particularly applicable if your views are the output of some statistical model, which may also provide the view uncertainty.

Another parameter that controls the relative weighting of the prior and the views is $$\tau$$. There is a lot to be said about tuning this parameter, with many contradictory rules of thumb. Indeed, there has been an entire paper written on it [3]. We choose the sensible default $$\tau = 0.05$$.

Note

If you use the default estimate of $$\Omega$$, or omega="idzorek", it turns out that the value of $$\tau$$ does not matter. This is a consequence of the mathematics: the $$\tau$$ cancels in the matrix multiplications.
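This cancellation is easy to verify numerically: with $$\Omega \propto \tau$$, the $$\tau$$ factors cancel and the posterior returns are identical for any choice of $$\tau$$. A self-contained NumPy check with made-up inputs:

```python
import numpy as np

def bl_returns(pi, Sigma, P, Q, tau):
    """Posterior returns using the default Omega = diag(tau * P Sigma P^T)."""
    Omega = np.diag(np.diag(tau * P @ Sigma @ P.T))
    A = np.linalg.inv(tau * Sigma)
    om_inv = np.linalg.inv(Omega)
    return np.linalg.inv(A + P.T @ om_inv @ P) @ (A @ pi + P.T @ om_inv @ Q)

Sigma = np.array([[0.04, 0.01], [0.01, 0.09]])
pi = np.array([[0.05], [0.07]])
P = np.array([[1.0, -1.0]])   # one relative view
Q = np.array([[0.02]])

r1 = bl_returns(pi, Sigma, P, Q, tau=0.05)
r2 = bl_returns(pi, Sigma, P, Q, tau=0.9)
# r1 and r2 agree to machine precision
```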

## Output of the BL model¶

The BL model outputs posterior estimates of the returns and covariance matrix. The default suggestion in the literature is to then input these into an optimizer (see General Efficient Frontier). A quick alternative, which is quite useful for debugging, is to calculate the weights implied by the returns vector [4]. It is actually the reverse of the procedure we used to calculate the returns implied by the market weights.

$w = (\delta \Sigma)^{-1} E(R)$
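The reversibility is easy to see numerically: feeding the market-implied returns straight back through $$w = (\delta \Sigma)^{-1} E(R)$$ recovers the market weights exactly. A NumPy sketch with made-up inputs:

```python
import numpy as np

Sigma = np.array([[0.04, 0.01], [0.01, 0.09]])
delta = 2.5
w_mkt = np.array([[0.6], [0.4]])               # market-cap weights

pi = delta * Sigma @ w_mkt                     # forward: market-implied returns
w_implied = np.linalg.inv(delta * Sigma) @ pi  # reverse: return-implied weights
# w_implied recovers w_mkt
```

In practice you would apply the reverse step to the BL posterior returns rather than the prior, which is exactly what bl_weights() does.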

In PyPortfolioOpt, this is available under BlackLittermanModel.bl_weights(). Because the BlackLittermanModel class inherits from BaseOptimizer, this follows the same API as the EfficientFrontier objects:

from pypfopt import black_litterman
from pypfopt.black_litterman import BlackLittermanModel
from pypfopt.efficient_frontier import EfficientFrontier

viewdict = {"AAPL": 0.20, "BBY": -0.30, "BAC": 0, "SBUX": -0.2, "T": 0.15}
bl = BlackLittermanModel(cov_matrix, absolute_views=viewdict)

rets = bl.bl_returns()
ef = EfficientFrontier(rets, cov_matrix)

# OR use return-implied weights
delta = black_litterman.market_implied_risk_aversion(market_prices)
bl.bl_weights(delta)
weights = bl.clean_weights()


## Documentation reference¶

The black_litterman module houses the BlackLittermanModel class, which generates posterior estimates of expected returns given a prior estimate and user-supplied views. In addition, two utility functions are defined, which calculate:

• market-implied prior estimate of returns
• market-implied risk-aversion parameter
class pypfopt.black_litterman.BlackLittermanModel(cov_matrix, pi=None, absolute_views=None, Q=None, P=None, omega=None, view_confidences=None, tau=0.05, risk_aversion=1, **kwargs)[source]

A BlackLittermanModel object (inheriting from BaseOptimizer) requires a specific input format, specifying the prior, the views, the uncertainty in views, and a picking matrix to map views to the asset universe. We can then compute posterior estimates of returns and covariance. Helper methods have been provided to supply defaults where possible.

Instance variables:

• Inputs:

• cov_matrix - np.ndarray
• n_assets - int
• tickers - str list
• Q - np.ndarray
• P - np.ndarray
• pi - np.ndarray
• omega - np.ndarray
• tau - float
• Output:

• posterior_rets - pd.Series
• posterior_cov - pd.DataFrame
• weights - np.ndarray

Public methods:

• default_omega() - view uncertainty proportional to asset variance
• idzorek_method() - convert views specified as percentages into BL uncertainties
• bl_returns() - posterior estimate of returns
• bl_cov() - posterior estimate of covariance
• bl_weights() - weights implied by posterior returns
• portfolio_performance() calculates the expected return, volatility and Sharpe ratio for the allocated portfolio.
• set_weights() creates self.weights (np.ndarray) from a weights dict
• clean_weights() rounds the weights and clips near-zeros.
• save_weights_to_file() saves the weights to csv, json, or txt.
__init__(cov_matrix, pi=None, absolute_views=None, Q=None, P=None, omega=None, view_confidences=None, tau=0.05, risk_aversion=1, **kwargs)[source]
Parameters:

• cov_matrix (pd.DataFrame or np.ndarray) – NxN covariance matrix of returns
• pi (np.ndarray, pd.Series, optional) – Nx1 prior estimate of returns, defaults to None. If pi="market", calculate a market-implied prior (requires market_caps to be passed). If pi="equal", use an equal-weighted prior.
• absolute_views (pd.Series or dict, optional) – a collection of K absolute views on a subset of assets, defaults to None. If this is provided, we do not need P, Q.
• Q (np.ndarray or pd.DataFrame, optional) – Kx1 views vector, defaults to None
• P (np.ndarray or pd.DataFrame, optional) – KxN picking matrix, defaults to None
• omega (np.ndarray or pd.DataFrame or string, optional) – KxK view uncertainty matrix (diagonal), defaults to None. Can instead pass "idzorek" to use Idzorek’s method (requires you to pass view_confidences). If omega="default" or None, we set the uncertainty proportional to the variance.
• view_confidences (np.ndarray, pd.Series, list, optional) – Kx1 vector of percentage view confidences (between 0 and 1), required to compute omega via Idzorek’s method.
• tau (float, optional) – the weight-on-views scalar (default is 0.05)
• risk_aversion (positive float, optional) – risk aversion parameter, defaults to 1
• market_caps (np.ndarray, pd.Series, optional) – (kwarg) market caps for the assets, required if pi="market"
• risk_free_rate (float, defaults to 0.02) – (kwarg) risk_free_rate is needed in some methods

Caution

You must specify the covariance matrix and either absolute views or both Q and P, except in the special case where you provide exactly one view per asset, in which case P is inferred.

bl_cov()[source]

Calculate the posterior estimate of the covariance matrix, given views on some assets. Based on He and Litterman (2002). It is assumed that omega is diagonal. If this is not the case, please manually set omega_inv.

Returns: posterior covariance matrix pd.DataFrame
bl_returns()[source]

Calculate the posterior estimate of the returns vector, given views on some assets.

Returns: posterior returns vector pd.Series
bl_weights(risk_aversion=None)[source]

Compute the weights implied by the posterior returns, given the market price of risk. Technically this can be applied to any estimate of the expected returns, and is in fact a special case of mean-variance optimization.

$w = (\delta \Sigma)^{-1} E(R)$
Parameters: risk_aversion (positive float, optional) – risk aversion parameter, defaults to 1

Returns: asset weights implied by returns (OrderedDict)
static default_omega(cov_matrix, P, tau)[source]

If the uncertainty matrix omega is not provided, we calculate using the method of He and Litterman (1999), such that the ratio omega/tau is proportional to the variance of the view portfolio.

Returns: KxK diagonal uncertainty matrix np.ndarray
static idzorek_method(view_confidences, cov_matrix, pi, Q, P, tau, risk_aversion=1)[source]

Use Idzorek’s method to create the uncertainty matrix given user-specified percentage confidences. We use the closed-form solution described by Jay Walters in The Black-Litterman Model in Detail (2014).

Parameters: view_confidences (np.ndarray, pd.Series, list, optional) – Kx1 vector of percentage view confidences (between 0 and 1), required to compute omega via Idzorek’s method.

Returns: KxK diagonal uncertainty matrix (np.ndarray)
optimize(risk_aversion=None)[source]

Alias for bl_weights for consistency with other methods.

portfolio_performance(verbose=False, risk_free_rate=0.02)[source]

After optimising, calculate (and optionally print) the performance of the optimal portfolio. Currently calculates expected return, volatility, and the Sharpe ratio. This method uses the BL posterior returns and covariance matrix.

Parameters:

• verbose (bool, optional) – whether performance should be printed, defaults to False
• risk_free_rate (float, optional) – risk-free rate of borrowing/lending, defaults to 0.02. The period of the risk-free rate should correspond to the frequency of expected returns.

Raises: ValueError – if weights have not been calculated yet

Returns: expected return, volatility, Sharpe ratio (float, float, float)
pypfopt.black_litterman.market_implied_prior_returns(market_caps, risk_aversion, cov_matrix, risk_free_rate=0.02)[source]

Compute the prior estimate of returns implied by the market weights. In other words, given each asset’s contribution to the risk of the market portfolio, how much are we expecting to be compensated?

$\Pi = \delta \Sigma w_{mkt}$
Parameters:

• market_caps ({ticker: cap} dict or pd.Series) – market capitalisations of all assets
• risk_aversion (positive float) – risk aversion parameter
• cov_matrix (pd.DataFrame) – covariance matrix of asset returns
• risk_free_rate (float, optional) – risk-free rate of borrowing/lending, defaults to 0.02. You should use the appropriate time period, corresponding to the covariance matrix.

Returns: prior estimate of returns as implied by the market caps (pd.Series)
pypfopt.black_litterman.market_implied_risk_aversion(market_prices, frequency=252, risk_free_rate=0.02)[source]

Calculate the market-implied risk-aversion parameter (i.e. market price of risk) based on market prices. For example, if the market has excess returns of 10% a year with 5% variance, the risk-aversion parameter is 2, i.e. you have to be compensated 2x the variance.

$\delta = \frac{R - R_f}{\sigma^2}$
Parameters:

• market_prices (pd.Series with DatetimeIndex) – the (daily) prices of the market portfolio, e.g. SPY
• frequency (int, optional) – number of time periods in a year, defaults to 252 (the number of trading days in a year)
• risk_free_rate (float, optional) – risk-free rate of borrowing/lending, defaults to 0.02. The period of the risk-free rate should correspond to the frequency of expected returns.

Raises: TypeError – if market_prices cannot be parsed

Returns: market-implied risk aversion (float)

## References¶

 [1] Idzorek T. A step-by-step guide to the Black-Litterman model: Incorporating user-specified confidence levels. In: Forecasting Expected Returns in the Financial Markets. Elsevier Ltd; 2007. p. 17–38.
 [2] Black, F; Litterman, R. Combining investor views with market equilibrium. The Journal of Fixed Income, 1991.
 [3] Walters, Jay, The Factor Tau in the Black-Litterman Model (October 9, 2013). Available at SSRN: https://ssrn.com/abstract=1701467 or http://dx.doi.org/10.2139/ssrn.1701467
 [4] Walters J. The Black-Litterman Model in Detail (2014). SSRN Electron J.;(February 2007):1–65.