robertmartin8 / PyPortfolioOpt

Financial portfolio optimisation in python, including classical efficient frontier, Black-Litterman, Hierarchical Risk Parity
https://pyportfolioopt.readthedocs.io/
MIT License

Port portfolio optimisation #378

Closed · qowiews closed this issue 2 years ago

qowiews commented 2 years ago

Hi,

I am trying to implement a portfolio optimisation that minimises the tracking error (TE) of the portfolio vs. the benchmark. In that sense I want a cut-off, so I want the TE to be less than or equal to 0.1^2.

code so far:

mu = expected_returns.mean_historical_return(prices)
S = risk_models.sample_cov(prices)
ef = EfficientFrontier(mu, S)
ef.add_objective(objective_functions.ex_post_tracking_error, historic_returns=rets, benchmark_returns=spy_rets)  # want this to be <= 0.1**2
ef.min_volatility()
phschiele commented 2 years ago

Hi @qowiews, since you only want to ensure that TE is <= 0.01, but not minimise it beyond that, you should look into .add_constraint() rather than .add_objective().

qowiews commented 2 years ago

Hi @phschiele,

So is it impossible to use objective_functions.ex_post_tracking_error(historic_returns=...) directly in an .add_constraint()? Or am I forced to write a def or a lambda function in the add_constraint()?

Another question: is it possible to minimise the TE and at the same time maximise the Sharpe ratio?

phschiele commented 2 years ago

@qowiews Yes, the lambda wrapper is required at the moment. Please find below an MWE:

import numpy as np

from pypfopt import EfficientFrontier
from pypfopt.expected_returns import returns_from_prices
from pypfopt.objective_functions import ex_post_tracking_error
from tests.utilities_for_tests import get_data

# Load the test price data shipped with the repo and compute historical returns
df = get_data()
rets = returns_from_prices(df).dropna()
# The benchmark is the equal-weighted average return of all assets
bm_rets = rets.mean(axis=1)

mean_return = rets.mean(axis=0)
sample_cov_matrix = rets.cov()

# Unconstrained minimum-volatility portfolio
ef = EfficientFrontier(mean_return, sample_cov_matrix)
print(ef.min_volatility())
>>> OrderedDict([('GOOG', 0.007909381931436), ('AAPL', 0.0306900454136316), ('FB', 0.0105068928339534),...

# Same optimisation, but with the ex-post tracking error (a variance) capped at 0.01
ef_te_constraint = EfficientFrontier(mean_return, sample_cov_matrix)
te_constraint = lambda x: ex_post_tracking_error(x, rets, bm_rets) <= 0.01
ef_te_constraint.add_constraint(te_constraint)
print(ef_te_constraint.min_volatility())
>>> OrderedDict([('GOOG', 0.0265565609820285), ('AAPL', 0.0434402116734154), ('FB', 0.0336672290976972),...

As expected, the constraint pulls the weights closer to the benchmark, which in this example is an equal-weight portfolio.
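A minimal sketch to check this, reusing the objects from the MWE above (the norm-based comparison against the 1/N weights is illustrative only, not part of the library):

import numpy as np

# Sketch: distance of each solution from the equal-weight (1/N) benchmark.
# The TE-constrained weights should sit noticeably closer to 1/N.
n_assets = len(mean_return)
w_equal = np.ones(n_assets) / n_assets
w_free = np.array(list(ef.clean_weights().values()))
w_te = np.array(list(ef_te_constraint.clean_weights().values()))
print(np.linalg.norm(w_free - w_equal))  # further from the benchmark
print(np.linalg.norm(w_te - w_equal))    # closer to the benchmark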

Optimizing with respect to multiple objectives at once is not possible, but you can define a trade-off parameter, e.g. gamma, and minimize -sharpe_ratio + gamma * te instead.
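A minimal sketch of that trade-off on top of the MWE above, under some assumptions: the gamma value is arbitrary, and since maximising the Sharpe ratio directly alongside extra objectives is awkward (max_sharpe relies on a variable transformation), max_quadratic_utility is used here as a convex stand-in for the return/risk term:

# Sketch: penalise tracking error with weight gamma while optimising a
# quadratic utility (expected return minus risk-aversion-weighted variance).
gamma = 0.5  # hypothetical trade-off parameter; tune to taste

ef_tradeoff = EfficientFrontier(mean_return, sample_cov_matrix)
ef_tradeoff.add_objective(lambda w: gamma * ex_post_tracking_error(w, rets, bm_rets))
print(ef_tradeoff.max_quadratic_utility())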

qowiews commented 2 years ago

Hi Thanks,

I have two other questions:

  1. Why are you stating that the weights will be:

w = np.ones((len(mean_return),)) / len(mean_return)

  2. Stating the TE constraint as 0.01, will this be interpreted as a standard deviation of 0.01?

    te_constraint = lambda x: ex_post_tracking_error(x, rets, bm_rets) <= 0.01

phschiele commented 2 years ago

> 1. Why are you stating that the weights will be:
>
> w = np.ones((len(mean_return),)) / len(mean_return)

This was an unused line I accidentally copied from the tests. I have updated the example above.

> 2. Stating the TE constraint as 0.01, will this be interpreted as a standard deviation of 0.01?
>
> te_constraint = lambda x: ex_post_tracking_error(x, rets, bm_rets) <= 0.01

ex_post_tracking_error() returns the variance, i.e. a value of 0.01 corresponds to a tracking error standard deviation of 0.1.
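A minimal sketch to check the scale, reusing the MWE above (this assumes ex_post_tracking_error also accepts a plain NumPy weight vector, as it does in the library's tests):

import numpy as np

# Sketch: evaluate the realised tracking error of the constrained weights.
# The returned value is a variance-like quantity, so take the square root
# to get back to a standard deviation.
w_opt = np.array(list(ef_te_constraint.clean_weights().values()))
te_var = ex_post_tracking_error(w_opt, rets, bm_rets)
print(te_var, np.sqrt(te_var))  # roughly <= 0.01 and <= 0.1 respectively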

qowiews commented 2 years ago

Hi again, Thanks, great help!