Black-Litterman Allocation¶
The Black-Litterman (BL) model [1] takes a Bayesian approach to asset allocation. Specifically, it combines a prior estimate of returns (canonically, the market-implied returns) with views on certain assets, to produce a posterior estimate of expected returns. The advantages of this are:
 You can provide views on only a subset of assets and BL will meaningfully propagate them, taking into account the covariance with the other assets.
 You can provide confidence in your views.
 Using Black-Litterman posterior returns results in much more stable portfolios than using mean historical returns.
Essentially, Black-Litterman treats the vector of expected returns itself as a quantity to be estimated. The Black-Litterman formula is given below:
\[E(R) = [(\tau\Sigma)^{-1} + P^T \Omega^{-1} P]^{-1} [(\tau\Sigma)^{-1} \Pi + P^T \Omega^{-1} Q]\]
where:
 \(E(R)\) is a Nx1 vector of expected returns, where N is the number of assets.
 \(Q\) is a Kx1 vector of views.
 \(P\) is the KxN picking matrix which maps views to the universe of assets. Essentially, it tells the model which view corresponds to which asset(s).
 \(\Omega\) is the KxK uncertainty matrix of views.
 \(\Pi\) is the Nx1 vector of prior expected returns.
 \(\Sigma\) is the NxN covariance matrix of asset returns (as always).
 \(\tau\) is a scalar tuning constant.
Though the formula appears quite unwieldy, it simply represents a weighted average between the prior estimate of returns and the views, where the weighting is determined by the confidence in the views and the parameter \(\tau\).
Similarly, we can calculate a posterior estimate of the covariance matrix:
\[\hat{\Sigma} = \Sigma + [(\tau\Sigma)^{-1} + P^T \Omega^{-1} P]^{-1}\]
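Translating the two posterior formulas directly into NumPy makes their structure clear. The sketch below is a transcription of the equations themselves, not PyPortfolioOpt's implementation; all inputs are toy numbers:

```python
import numpy as np

def bl_posterior(pi, Sigma, P, Q, Omega, tau=0.05):
    """Posterior returns and covariance from the Black-Litterman equations."""
    tau_sigma_inv = np.linalg.inv(tau * Sigma)
    omega_inv = np.linalg.inv(Omega)
    # M = [(tau*Sigma)^-1 + P' Omega^-1 P]^-1 appears in both formulas
    M = np.linalg.inv(tau_sigma_inv + P.T @ omega_inv @ P)
    post_rets = M @ (tau_sigma_inv @ pi + P.T @ omega_inv @ Q)
    post_cov = Sigma + M
    return post_rets, post_cov
```

The weighted-average interpretation falls out of this: with a near-zero \(\Omega\) (total confidence) the posterior return of a viewed asset converges to the view itself, while a very large \(\Omega\) collapses it back to the prior.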
Though the algorithm is relatively simple, BL proved to be a challenge from a software engineering perspective because it’s not quite clear how best to fit it into PyPortfolioOpt’s API. The full discussion can be found in a GitHub issue thread, but I ultimately decided that though BL is not technically an optimiser, it didn’t make sense to split up its methods into expected_returns or risk_models. I have thus made it an independent module and, owing to the comparatively extensive theory, have given it a dedicated documentation page. I’d like to thank Felipe Schneider for his multiple contributions to the Black-Litterman implementation. For a full example of its usage, including the acquisition of market cap data for free, please refer to the cookbook recipe.
Caution
Our implementation of Black-Litterman makes frequent use of the fact that Python 3.6+ dictionaries remain ordered. It is still possible to use Python 3.5, but you will have to construct the BL inputs explicitly (Q, P, omega).
Priors¶
You can think of the prior as the “default” estimate, in the absence of any information. Black and Litterman (1991) [2] provide the insight that a natural choice for this prior is the market’s estimate of the return, which is embedded into the market capitalisation of the asset.
Every asset in the market portfolio contributes a certain amount of risk to the portfolio. Standard theory suggests that investors must be compensated for the risk that they take, so we can attribute to each asset an expected compensation (i.e. a prior estimate of returns). This is quantified by the market-implied risk premium, which is the market’s excess return divided by its variance:
\[\delta = \frac{R - R_f}{\sigma^2}\]
To calculate the market-implied returns, we then use the following formula:
\[\Pi = \delta \Sigma w_{mkt}\]
Here, \(w_{mkt}\) denotes the market-cap weights. This formula computes the total amount of risk contributed by each asset and multiplies it by the market price of risk, resulting in the market-implied returns vector \(\Pi\). We can use PyPortfolioOpt to calculate this as follows:
from pypfopt import black_litterman, risk_models
"""
cov_matrix is a NxN sample covariance matrix
mcaps is a dict of market caps
market_prices is a series of S&P500 prices
"""
delta = black_litterman.market_implied_risk_aversion(market_prices)
prior = black_litterman.market_implied_prior_returns(mcaps, delta, cov_matrix)
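Under the hood, this prior is essentially one matrix product. A minimal NumPy sketch with made-up market caps and a toy covariance matrix (the real helper also handles the risk_free_rate offset and ticker alignment):

```python
import numpy as np

# Hypothetical market caps (numbers are made up for illustration)
mcaps = {"AAPL": 2000.0, "MSFT": 1800.0, "XOM": 400.0}
# Toy 3x3 covariance matrix of returns
cov_matrix = np.array(
    [
        [0.04, 0.01, 0.00],
        [0.01, 0.05, 0.01],
        [0.00, 0.01, 0.06],
    ]
)
delta = 2.0  # assumed market-implied risk aversion

# Market-cap weights, then Pi = delta * Sigma * w_mkt
caps = np.array(list(mcaps.values()))
w_mkt = caps / caps.sum()
pi = delta * cov_matrix @ w_mkt
```

Each entry of `pi` is the asset's risk contribution to the market portfolio scaled by the market price of risk.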
There is nothing stopping you from using any prior you see fit (but it must have the same dimensionality as the universe). If you think that the mean historical returns are a good prior, you could go with that. But a significant body of research shows that mean historical returns are a completely uninformative prior.
Note
You don’t technically have to provide a prior estimate to the Black-Litterman model. This is particularly useful if your views (and confidences) were generated by some proprietary model, in which case BL is essentially a clever way of mixing your views.
Views¶
In the Black-Litterman model, users can provide either absolute or relative views. Absolute views are statements like: “AAPL will return 10%” or “XOM will drop 40%”. Relative views, on the other hand, are statements like “GOOG will outperform FB by 3%”.
These views must be specified in the vector \(Q\) and mapped to the asset universe via the picking matrix \(P\). A brief example of this is shown below, though a comprehensive guide is given by Idzorek. Let’s say that our universe is defined by the ordered list: SBUX, GOOG, FB, AAPL, BAC, JPM, T, GE, MSFT, XOM. We want to represent four views on these 10 assets, two absolute and two relative:
 SBUX will drop 20% (absolute)
 MSFT will rise by 5% (absolute)
 GOOG will outperform FB by 10% (relative)
 BAC and JPM will outperform T and GE by 15% (relative)
The corresponding views vector is formed by taking the numbers above and putting them into a column:
import numpy as np

Q = np.array([-0.20, 0.05, 0.10, 0.15]).reshape(-1, 1)
The picking matrix is more interesting. Remember that its role is to link the views (which mention 8 assets) to the universe of 10 assets. Arguably, this is the most important part of the model because it is what allows us to propagate our expectations (and confidences in expectations) into the model:
P = np.array(
    [
        [1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 1, 0],
        [0, 1, -1, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0.5, 0.5, -0.5, -0.5, 0, 0],
    ]
)
A brief explanation of the above:
 Each view has a corresponding row in the picking matrix (the order matters)
 Absolute views have a single 1 in the column corresponding to the ticker’s order in the universe.
 Relative views have a positive number in the nominally outperforming asset columns and a negative number in the nominally underperforming asset columns. The numbers in each row should sum up to 0.
PyPortfolioOpt provides a helper method for inputting absolute views as either a dict or pd.Series – if you have relative views, you must build your picking matrix manually:
from pypfopt.black_litterman import BlackLittermanModel
viewdict = {"AAPL": 0.20, "BBY": -0.30, "BAC": 0, "SBUX": -0.2, "T": 0.15}
bl = BlackLittermanModel(cov_matrix, absolute_views=viewdict)
Confidence matrix and tau¶
The confidence matrix is a diagonal covariance matrix containing the variances of each view. One heuristic for calculating \(\Omega\) is to say that it is proportional to the variance of the priors. This is reasonable – quantities that move around a lot are harder to forecast! Hence PyPortfolioOpt does not require you to input a confidence matrix, and defaults to:
\[\Omega = \tau \cdot \mathrm{diag}(P \Sigma P^T)\]
Alternatively, we provide an implementation of Idzorek’s method [1]. This allows you to specify your view uncertainties as percentage confidences. To use this, choose omega="idzorek" and pass a list of confidences (from 0 to 1) into the view_confidences parameter.
You are of course welcome to provide your own estimate. This is particularly applicable if your views are the output of some statistical model, which may also provide the view uncertainty.
Another parameter that controls the relative weighting of the prior and the views is \(\tau\). There is a lot to be said about tuning this parameter, with many contradictory rules of thumb. Indeed, an entire paper has been written on it [3]. We choose the sensible default \(\tau = 0.05\).
Note
If you use the default estimate of \(\Omega\), or omega="idzorek", it turns out that the value of \(\tau\) does not matter. This is a consequence of the mathematics: \(\tau\) cancels in the matrix multiplications.
Output of the BL model¶
The BL model outputs posterior estimates of the returns and covariance matrix. The default suggestion in the literature is to then input these into an optimiser (see Efficient Frontier Optimisation). A quick alternative, which is quite useful for debugging, is to calculate the weights implied by the returns vector [4]. It is actually the reverse of the procedure we used to calculate the returns implied by the market weights.
In PyPortfolioOpt, this is available under BlackLittermanModel.bl_weights(). Because the BlackLittermanModel class inherits from BaseOptimizer, this follows the same API as the EfficientFrontier objects:
from pypfopt import black_litterman
from pypfopt.black_litterman import BlackLittermanModel
from pypfopt.efficient_frontier import EfficientFrontier
viewdict = {"AAPL": 0.20, "BBY": -0.30, "BAC": 0, "SBUX": -0.2, "T": 0.15}
bl = BlackLittermanModel(cov_matrix, absolute_views=viewdict)
rets = bl.bl_returns()
ef = EfficientFrontier(rets, cov_matrix)
# OR use return-implied weights
delta = black_litterman.market_implied_risk_aversion(market_prices)
bl.bl_weights(delta)
weights = bl.clean_weights()
Documentation reference¶
The black_litterman module houses the BlackLittermanModel class, which generates posterior estimates of expected returns given a prior estimate and user-supplied views. In addition, two utility functions are defined, which calculate:
 market-implied prior estimate of returns
 market-implied risk-aversion parameter

class pypfopt.black_litterman.BlackLittermanModel(cov_matrix, pi=None, absolute_views=None, Q=None, P=None, omega=None, view_confidences=None, tau=0.05, risk_aversion=1, **kwargs)[source]¶
A BlackLittermanModel object (inheriting from BaseOptimizer) requires a specific input format, specifying the prior, the views, the uncertainty in views, and a picking matrix to map views to the asset universe. We can then compute posterior estimates of returns and covariance. Helper methods have been provided to supply defaults where possible.
Instance variables:
Inputs:
 cov_matrix - np.ndarray
 n_assets - int
 tickers - str list
 Q - np.ndarray
 P - np.ndarray
 pi - np.ndarray
 omega - np.ndarray
 tau - float
Output:
 posterior_rets - pd.Series
 posterior_cov - pd.DataFrame
 weights - np.ndarray
Public methods:
 default_omega() - view uncertainty proportional to asset variance
 idzorek_method() - convert views specified as percentages into BL uncertainties
 bl_returns() - posterior estimate of returns
 bl_cov() - posterior estimate of covariance
 bl_weights() - weights implied by posterior returns
 portfolio_performance() - calculates the expected return, volatility and Sharpe ratio for the allocated portfolio
 set_weights() - creates self.weights (np.ndarray) from a weights dict
 clean_weights() - rounds the weights and clips near-zeros
 save_weights_to_file() - saves the weights to csv, json, or txt

__init__(cov_matrix, pi=None, absolute_views=None, Q=None, P=None, omega=None, view_confidences=None, tau=0.05, risk_aversion=1, **kwargs)[source]¶
Parameters:
 cov_matrix (pd.DataFrame or np.ndarray) – NxN covariance matrix of returns
 pi (np.ndarray, pd.Series, optional) – Nx1 prior estimate of returns, defaults to None. If pi="market", calculate a market-implied prior (requires market_caps to be passed). If pi="equal", use an equal-weighted prior.
 absolute_views (pd.Series or dict, optional) – a collection of K absolute views on a subset of assets, defaults to None. If this is provided, we do not need P, Q.
 Q (np.ndarray or pd.DataFrame, optional) – Kx1 views vector, defaults to None
 P (np.ndarray or pd.DataFrame, optional) – KxN picking matrix, defaults to None
 omega (np.ndarray or pd.DataFrame or string, optional) – KxK view uncertainty matrix (diagonal), defaults to None. Can instead pass "idzorek" to use Idzorek’s method (requires you to pass view_confidences). If omega="default" or None, we set the uncertainty proportional to the variance.
 view_confidences (np.ndarray, pd.Series, list, optional) – Kx1 vector of percentage view confidences (between 0 and 1), required to compute omega via Idzorek’s method.
 tau (float, optional) – the weight-on-views scalar (default is 0.05)
 risk_aversion (positive float, optional) – risk aversion parameter, defaults to 1
 market_caps (np.ndarray, pd.Series, optional) – (kwarg) market caps for the assets, required if pi="market"
 risk_free_rate (float, defaults to 0.02) – (kwarg) risk_free_rate is needed in some methods
Caution
You must specify the covariance matrix and either absolute views or both Q and P, except in the special case where you provide exactly one view per asset, in which case P is inferred.

bl_cov()[source]¶
Calculate the posterior estimate of the covariance matrix, given views on some assets. Based on He and Litterman (2002). It is assumed that omega is diagonal. If this is not the case, please manually set omega_inv.
Returns: posterior covariance matrix
Return type: pd.DataFrame

bl_returns()[source]¶
Calculate the posterior estimate of the returns vector, given views on some assets.
Returns: posterior returns vector
Return type: pd.Series

bl_weights(risk_aversion=None)[source]¶
Compute the weights implied by the posterior returns, given the market price of risk. Technically this can be applied to any estimate of the expected returns, and is in fact a special case of efficient frontier optimisation.
\[w = (\delta \Sigma)^{-1} E(R)\]
Parameters: risk_aversion (positive float, optional) – risk aversion parameter, defaults to 1
Returns: asset weights implied by returns
Return type: OrderedDict

static default_omega(cov_matrix, P, tau)[source]¶
If the uncertainty matrix omega is not provided, we calculate it using the method of He and Litterman (1999), such that the ratio omega/tau is proportional to the variance of the view portfolio.
Returns: KxK diagonal uncertainty matrix
Return type: np.ndarray

static idzorek_method(view_confidences, cov_matrix, pi, Q, P, tau, risk_aversion=1)[source]¶
Use Idzorek’s method to create the uncertainty matrix given user-specified percentage confidences. We use the closed-form solution described by Jay Walters in The Black-Litterman Model in Detail (2014).
Parameters: view_confidences (np.ndarray, pd.Series, list, optional) – Kx1 vector of percentage view confidences (between 0 and 1), required to compute omega via Idzorek’s method.
Returns: KxK diagonal uncertainty matrix
Return type: np.ndarray

portfolio_performance(verbose=False, risk_free_rate=0.02)[source]¶
After optimising, calculate (and optionally print) the performance of the optimal portfolio. Currently calculates expected return, volatility, and the Sharpe ratio. This method uses the BL posterior returns and covariance matrix.
Parameters:
 verbose (bool, optional) – whether performance should be printed, defaults to False
 risk_free_rate (float, optional) – risk-free rate of borrowing/lending, defaults to 0.02. The period of the risk-free rate should correspond to the frequency of expected returns.
Raises: ValueError – if weights have not been calculated yet
Returns: expected return, volatility, Sharpe ratio
Return type: (float, float, float)

optimize(risk_aversion=None)[source]¶
Alias for bl_weights for consistency with other methods.

pypfopt.black_litterman.market_implied_prior_returns(market_caps, risk_aversion, cov_matrix, risk_free_rate=0.02)[source]¶
Compute the prior estimate of returns implied by the market weights. In other words, given each asset’s contribution to the risk of the market portfolio, how much are we expecting to be compensated?
\[\Pi = \delta \Sigma w_{mkt}\]
Parameters:
 market_caps ({ticker: cap} dict or pd.Series) – market capitalisations of all assets
 risk_aversion (positive float) – risk aversion parameter
 cov_matrix (pd.DataFrame) – covariance matrix of asset returns
 risk_free_rate (float, optional) – risk-free rate of borrowing/lending, defaults to 0.02. You should use the appropriate time period, corresponding to the covariance matrix.
Returns: prior estimate of returns as implied by the market caps
Return type: pd.Series

pypfopt.black_litterman.market_implied_risk_aversion(market_prices, frequency=252, risk_free_rate=0.02)[source]¶
Calculate the market-implied risk-aversion parameter (i.e. the market price of risk) based on market prices. For example, if the market has excess returns of 10% a year with 5% variance, the risk-aversion parameter is 2, i.e. you have to be compensated 2x the variance.
\[\delta = \frac{R - R_f}{\sigma^2}\]
Parameters:
 market_prices (pd.Series with DatetimeIndex) – the (daily) prices of the market portfolio, e.g. SPY
 frequency (int, optional) – number of time periods in a year, defaults to 252 (the number of trading days in a year)
 risk_free_rate (float, optional) – risk-free rate of borrowing/lending, defaults to 0.02. The period of the risk-free rate should correspond to the frequency of expected returns.
Raises: TypeError – if market_prices cannot be parsed
Returns: marketimplied risk aversion
Return type: float
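As a quick sanity check on the formula, plugging in the illustrative numbers from the description above (10% excess return, 5% variance) recovers a risk-aversion parameter of 2:

```python
# delta = (R - R_f) / sigma^2 with illustrative numbers:
# 12% market return, 2% risk-free rate -> 10% excess return
market_return = 0.12
risk_free_rate = 0.02
variance = 0.05

delta = (market_return - risk_free_rate) / variance  # approximately 2
```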
References¶
[1] Idzorek T. A step-by-step guide to the Black-Litterman model: Incorporating user-specified confidence levels. In: Forecasting Expected Returns in the Financial Markets. Elsevier Ltd; 2007. p. 17–38.
[2] Black F, Litterman R. Combining investor views with market equilibrium. The Journal of Fixed Income, 1991.
[3] Walters J. The Factor Tau in the Black-Litterman Model (October 9, 2013). Available at SSRN: https://ssrn.com/abstract=1701467 or http://dx.doi.org/10.2139/ssrn.1701467
[4] Walters J. The Black-Litterman Model in Detail (2014). SSRN Electron J.;(February 2007):1–65.