jankrepl / deepdow

Portfolio optimization with deep learning.
https://deepdow.readthedocs.io
Apache License 2.0

[Feature Request] Optimal Portfolio Allocation via Independent Component Analysis #119

Open · kayuksel opened this issue 2 years ago

kayuksel commented 2 years ago

IC-variance-parity portfolio: a factor-risk-parity portfolio based on maximally independent factors (via ICA). Attachment: Optimal_Portfolio_ICA.pdf
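For reference, a minimal sketch of the variance-parity-on-independent-components idea, assuming a (time, assets) NumPy array of returns and using scikit-learn's FastICA for the unmixing; the function name and setup are illustrative, not taken from the paper.

import numpy as np
from sklearn.decomposition import FastICA

def ica_variance_parity_weights(returns):
    # returns: (T, N) array of asset returns
    n_assets = returns.shape[1]
    ica = FastICA(n_components=n_assets, whiten="unit-variance", random_state=0)
    ica.fit(returns)
    mixing = ica.mixing_  # (N, N): returns ~ sources @ mixing.T
    # With unit-variance sources, component i contributes (mixing.T @ w)_i ** 2
    # to portfolio variance, so an equal variance budget per independent
    # component means solving mixing.T @ w = 1/N for the asset weights.
    factor_exposure = np.full(n_assets, 1.0 / n_assets)
    w = np.linalg.pinv(mixing).T @ factor_exposure
    return w / np.abs(w).sum()  # normalise to unit gross exposure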

jankrepl commented 2 years ago

Hey there!

I will try to read the paper when I have some spare time! Looks interesting. Thank you for the suggestion!

kayuksel commented 1 year ago

I've written a loss function here for maximizing the eigenvalue entropy or Gini index of a portfolio. It also contains a PyTorch implementation of the probabilistic Sharpe ratio. Let me know if you try it.

import math

import torch

def torch_cdf(x):
    # Standard normal CDF via the Abramowitz & Stegun polynomial approximation,
    # applied element-wise so it works on batched tensors.
    neg_ones = x < 0
    x = x.abs()
    a1 = 0.319381530
    a2 = -0.356563782
    a3 = 1.781477937
    a4 = -1.821255978
    a5 = 1.330274429
    k = 1.0 / (1.0 + 0.2316419 * x)
    k2 = k * k
    k3 = k2 * k
    k4 = k3 * k
    k5 = k4 * k
    c = a1 * k + a2 * k2 + a3 * k3 + a4 * k4 + a5 * k5
    phi = 1.0 - c * (-x * x / 2.0).exp() * 0.3989422804014327  # 1 / sqrt(2 * pi)
    phi[neg_ones] = 1.0 - phi[neg_ones]  # symmetry: Phi(-x) = 1 - Phi(x)
    return phi

def calculate_psr(rewards):
    # Probabilistic Sharpe ratio (Bailey & Lopez de Prado): probability that the
    # true Sharpe ratio is above zero, given the sample moments of the returns.
    mean, std = rewards.mean(dim=0), rewards.std(dim=0)
    rdiff = rewards - mean
    zscore = rdiff / std
    skew = (zscore**3).mean(dim=0)
    kurto = ((zscore**4).mean(dim=0) - 1) / 4  # (kurtosis - 1) / 4 term of the PSR variance
    sharpe = mean / std
    # variance of the Sharpe ratio estimator over len(rewards) observations
    psr_in = (1 - skew * sharpe + kurto * sharpe**2) / (len(rewards) - 1)
    psr_out = torch_cdf(sharpe / psr_in.sqrt())
    psr_out[psr_out.isnan()] = 0.0  # zero-variance assets produce NaNs
    return mean, std, psr_out

def covv(X):
    # Sample covariance of the rows of X, where X is (variables, observations).
    D = X.shape[-1]
    mean = torch.mean(X, dim=-1).unsqueeze(-1)
    X = X - mean
    return 1 / (D - 1) * X @ X.transpose(-1, -2)

# valid_data is assumed to be a (time, assets) return tensor, so transposing it
# yields an (assets, assets) covariance matrix and its eigendecomposition.
cov_matrix = covv(valid_data.T)
eigen_values, eigen_vectors = torch.linalg.eigh(cov_matrix)

def get_entropy(portfolio_weights):
    # project the (batch, assets) weights onto the covariance eigenvectors
    eigen_vector_weights = eigen_vectors @ portfolio_weights.T
    # squared exposures, normalised into a distribution over eigen-directions
    rba = eigen_vector_weights ** 2
    rba = rba / rba.sum(dim=0, keepdim=True)
    # Shannon entropy of that distribution, scaled to [0, 1] by log(n)
    return -torch.nansum(rba * torch.log(rba), dim=0) / math.log(rba.shape[0])

# per-asset volatility (only used by the optional scaling commented out below)
std_vec = valid_data.std(dim=0)

def calculate_reward(weights, valid_data, train=False):
    # weights /= std_vec
    weights = weights / weights.abs().sum(dim=1).reshape(-1, 1)  # unit gross exposure
    rets = weights.matmul(valid_data.T)  # (batch, time) portfolio returns
    omg = rets.clamp(min=0.0).mean(dim=1) / rets.abs().mean(dim=1)  # Omega-style gain ratio
    # loss to minimise: negative PSR * Omega, weighted by eigenvalue entropy during training
    if train:
        return -(calculate_psr(rets.T)[-1] * omg) * get_entropy(weights)
    return -(calculate_psr(rets.T)[-1] * omg)
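
For context, here is a minimal sketch of how the loss could drive a gradient-based weight search; the synthetic valid_data, the raw_weights parameter, and the Adam settings are illustrative assumptions, not part of the snippet above.

import torch

# hypothetical data: 500 days of returns for 20 assets (define this before the
# covariance / eigendecomposition lines above, which read valid_data)
valid_data = torch.randn(500, 20) * 0.01

raw_weights = torch.nn.Parameter(torch.randn(1, 20))
optimizer = torch.optim.Adam([raw_weights], lr=1e-2)

for step in range(200):
    optimizer.zero_grad()
    loss = calculate_reward(raw_weights, valid_data, train=True).mean()
    loss.backward()
    optimizer.step()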