After looking at the results of greenelab/netreg#4, we were curious how similar the latent space representations learned by the algorithms implemented so far actually are, since they all seem to have very similar classification performance.
In 5.analyze_plier_compression.ipynb, I compare the sparsity of the weight matrices for all of the algorithms run so far (PCA, ICA, NMF, and PLIER with three different pathway datasets). I also compare the correlation of the weight matrices themselves using SVCCA. The results suggest that PLIER learns a latent space representation distinct from those of the other three methods.
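As a rough sketch of the kind of sparsity comparison described above (the function name and zero-tolerance threshold here are hypothetical, not taken from the notebook):

```python
import numpy as np

def weight_sparsity(W, tol=1e-8):
    """Fraction of weight-matrix entries with magnitude below tol."""
    return float(np.mean(np.abs(W) < tol))

# Toy weight matrices: a dense one (e.g. PCA-like) and a sparse one
# (PLIER's weights are encouraged to be sparse by its penalty).
rng = np.random.default_rng(0)
dense_W = rng.normal(size=(100, 20))
sparse_W = dense_W.copy()
sparse_W[rng.random(size=sparse_W.shape) < 0.7] = 0.0  # zero out ~70%

print(weight_sparsity(dense_W))   # near 0
print(weight_sparsity(sparse_W))  # near 0.7
```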
The utilities/cca_core.py script comes from the original implementation of SVCCA at https://github.com/google/svcca, so it isn't necessary to review this code in detail (unless you want to).
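For intuition about what the SVCCA comparison measures, here is a minimal, self-contained sketch of the CCA step using NumPy (the actual analysis uses cca_core from google/svcca, which also includes an SVD preprocessing step; this toy version, and all names in it, are illustrative only):

```python
import numpy as np

def cca_correlations(X, Y):
    """Canonical correlations between the column spaces of X and Y.

    X, Y: (n_samples, n_features) matrices with matched rows.
    After centering, the singular values of Qx^T Qy (Q from a thin QR
    decomposition) are the canonical correlations, each in [0, 1].
    """
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(Xc)
    Qy, _ = np.linalg.qr(Yc)
    s = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return np.clip(s, 0.0, 1.0)

rng = np.random.default_rng(1)
A = rng.normal(size=(200, 10))
B = A @ rng.normal(size=(10, 10))   # same subspace, different basis
C = rng.normal(size=(200, 10))      # unrelated representation

print(cca_correlations(A, B).mean())  # close to 1: same latent space
print(cca_correlations(A, C).mean())  # much lower: distinct spaces
```

A high mean canonical correlation indicates two weight matrices span similar latent spaces even if their individual components look different; a low mean is the signal behind "PLIER is learning a distinct representation."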