Closed by LegrandNico 2 years ago
Funny you mention this as it's something vaguely on my radar!
I actually made a start porting the quickperms from the PALM toolbox (with the permission of Winkler/Smith) here: https://github.com/jameschapman19/scikit-perm/blob/main/skperm/permutation_tests/cca_permutation_test.py
Which would be a nice bonus with a ported version of permCCA (i.e. where the user can supply their own permutations based on exchangeability blocks).
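For the exchangeability-blocks part, a minimal sketch of what user-supplied permutations could look like (`within_block_permutation` is a hypothetical helper written for illustration, not part of PALM, scikit-perm, or cca_zoo):

```python
import numpy as np

def within_block_permutation(blocks, rng):
    """Permute sample indices only within exchangeability blocks.

    `blocks` is one integer label per sample; samples are shuffled
    only among others sharing the same label. (Hypothetical helper
    for illustration only.)
    """
    blocks = np.asarray(blocks)
    perm = np.arange(len(blocks))
    for b in np.unique(blocks):
        idx = np.flatnonzero(blocks == b)
        perm[idx] = rng.permutation(idx)
    return perm

rng = np.random.default_rng(0)
# samples 0-2 are one block, samples 3-5 another:
perm = within_block_permutation([0, 0, 0, 1, 1, 1], rng)
# indices 0-2 stay within {0, 1, 2}; indices 3-5 stay within {3, 4, 5}
```

The permutations generated this way could then be passed to the test in place of unrestricted shuffles.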
I got code for the CCA part mentioned by @LegrandNico. It's a full implementation in Python using cca_zoo. It's not packaged up, but if people want to work from here, this is the snippet:
```python
# Permutation testing adapted from
# https://github.com/andersonwinkler/PermCCA/blob/6098d35da79618588b8763c5b4a519438703dba4/permcca.m#L131-L164
# z_transitions, z_mriq, trained_c and latent_dims come from my own data/model.
import numpy as np
import pandas as pd
from cca_zoo.models import PMD

n_permutation = 2
rng = np.random.RandomState(42)
lW, cnt = np.zeros(latent_dims), np.zeros(latent_dims)
for i in range(n_permutation):
    print(f"Permutation {1 + i} / {n_permutation}")
    if i == 0:
        # first pass: unpermuted data gives the reference statistic
        X_perm = z_transitions
        Y_perm = z_mriq
    else:
        # shuffle the row (observation) order of each view independently
        x_idx = rng.permutation(710)
        y_idx = rng.permutation(710)
        X_perm = z_transitions[x_idx]
        Y_perm = z_mriq[y_idx]
    for k in range(latent_dims):
        print(f"Mode {1 + k} of {latent_dims}")
        perm_model = PMD(c=trained_c,
                         latent_dims=(latent_dims - k),
                         max_iter=100)
        perm_model.fit(X_perm[:, k:], Y_perm[:, k:])
        r_perm = perm_model.train_correlations[0][1]
        # Winkler's step-down statistic: pool -log(1 - r^2) over modes k..K-1
        lWtmp = -1 * np.cumsum(np.log(1 - r_perm ** 2)[::-1])[::-1]
        lW[k] = lWtmp[0]
    if i == 0:
        lw1 = lW.copy()  # copy: lW is overwritten in place on later passes
    cnt = cnt + (lW >= lw1)
punc = cnt / n_permutation
pfwer = pd.DataFrame(punc).cummax().values  # FWER correction via cumulative maximum
print(punc)
print(pfwer)
```
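The `lWtmp` line is the core of Winkler's step-down statistic. A minimal NumPy check with made-up canonical correlations shows how it pools the remaining modes:

```python
import numpy as np

# Made-up canonical correlations for illustration
r = np.array([0.9, 0.5, 0.2])

# Reverse-cumulative -log(1 - r^2): the statistic for mode k pools
# evidence from modes k..K-1, so lW[0] uses all modes and lW[-1]
# uses only the last one
lW = -np.cumsum(np.log(1 - r ** 2)[::-1])[::-1]
```

Since each term `-log(1 - r**2)` is non-negative, `lW[0]` pools the most evidence and `lW` is non-increasing across modes.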
This is an incredibly lazy attempt.
Thank you @htwangtw , I think that will be really helpful.
I don't know how you want to integrate the permutation functionality with the rest of the package. I can try to put something together, but maybe I'll start with an example tutorial notebook to see if we can get everything running.
I would guess you could adapt @htwangtw's code to look something like https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.permutation_test_score.html
It could be rudimentary initially, with the goal of getting close to the sklearn API, which can run permutation tests in parallel.
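As a sketch of the logic behind that sklearn-style API (the function name mirrors sklearn's, but the body and the correlation score are illustrative, not sklearn's or cca_zoo's actual internals):

```python
import numpy as np

def permutation_test_score(score_fn, X, Y, n_permutations=100, seed=0):
    """Illustrative permutation test with a scikit-learn-like return
    signature: (score, permutation_scores, pvalue)."""
    rng = np.random.default_rng(seed)
    score = score_fn(X, Y)
    perm_scores = np.array([
        # shuffle Y's rows to break the X-Y pairing
        score_fn(X, Y[rng.permutation(len(Y))])
        for _ in range(n_permutations)
    ])
    # +1 correction counts the observed score among the permutations
    pvalue = (np.sum(perm_scores >= score) + 1) / (n_permutations + 1)
    return score, perm_scores, pvalue

def abs_corr(X, Y):
    # absolute Pearson correlation of two 1-D variables
    return abs(np.corrcoef(X, Y)[0, 1])

x = np.linspace(0, 1, 50)
y = x + np.random.default_rng(1).normal(0, 0.1, 50)
score, perm_scores, p = permutation_test_score(abs_corr, x, y, n_permutations=99)
# strong linear association, so p should be small
```

Each permutation is independent, so the loop over `perm_scores` is the part that parallelizes trivially (sklearn does this via `n_jobs`).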
Just for completeness: There are two other packages that might also help?
The pyls package has implemented permutation tests: https://pyls.readthedocs.io/en/latest/index.html
And there's also the resample package: https://github.com/resample-project/resample
I've put a version of this in cca_zoo.model_selection._validation with an API that hooks into scikit-learn permutation_test_score.
It's not quite the same as what Winkler does for multiple latent dimensions (though it should be similar), but it works for a single latent dimension. We should be able to build on this.
I'll add a proper example but it should work like:
```python
from cca_zoo.model_selection import permutation_test_score
from cca_zoo.models import rCCA
import numpy as np

X = np.random.rand(100, 10)
Y = np.random.rand(100, 8)

model = rCCA(c=[0.1, 0.3])
permutation_test_score(model, [X, Y], n_permutations=10)
```
which returns the score (average correlation across dimensions), the permutation scores, and a p-value, following the scikit-learn convention.
Although I got a lot of insights from your nice discussion here, there is one thing I am still confused about: if I permute one variable from all views (variable number: 1), or if I permute 1/5 of the total variables in all views (variable number: around 10), the resulting significance (p-value) is very different.
It's understandable that permuting more variables introduces more randomness, so the canonical correlation coefficient is lower compared to the reference experiment. But do you know how many variables it is common to permute? And should the permutation be done once for a single view, or for multiple views?
Many thanks in advance!
Cheers, Wantong
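One clarifying note on the question above: in the snippets earlier in this thread, what gets permuted is the row (observation) order of a whole view, not individual variables. A quick NumPy check with synthetic paired data (all names and values made up for illustration) shows why that is the usual choice: a row permutation leaves each view's internal covariance untouched while destroying the cross-view pairing:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
Y = X[:, :2] + 0.1 * rng.normal(size=(200, 2))  # Y strongly paired with X

idx = rng.permutation(len(Y))
Y_perm = Y[idx]  # permute whole rows (observations) of one view

# within-view covariance of Y is unchanged by a row permutation
assert np.allclose(np.cov(Y.T), np.cov(Y_perm.T))

# but the cross-view association is destroyed
r_orig = np.corrcoef(X[:, 0], Y[:, 0])[0, 1]
r_perm = np.corrcoef(X[:, 0], Y_perm[:, 0])[0, 1]
```

Permuting individual variables instead changes the null hypothesis being tested, which would explain the very different p-values observed.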
I would like to implement (if not already available elsewhere) a Python version of the permutation tests for CCA described in
The paper comes with a repository (Matlab) that could be ported to Python without requiring additional dependencies.