BrainSpace is an open-access toolbox for identifying and analyzing gradients from neuroimaging and connectomics datasets, available in both Python and Matlab.
The official documentation states that "The lambda property stores the variance explained (for PCA) or the eigenvalues (for LE and DM)." When using DM, I am wondering how to obtain the percentage of variance explained by each gradient.

If I understand correctly, the eigenvalues in DM are scaled, so it seems doubtful that dividing each eigenvalue by the sum of all eigenvalues gives the percentage of variance explained. Is there a workaround, or is this a valid approach?

An alternative view is that even though the eigenvalues are scaled, their relative magnitudes still carry useful information: larger eigenvalues correspond to gradients that capture more prominent variation in the data. One can therefore still use the eigenvalues to compare the importance of different gradients. The caveat is that the resulting percentages would not represent "variance explained" in the conventional sense (as they would with PCA), but rather the relative importance of each gradient in capturing the data variation.
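A minimal sketch of the normalization described above, using a hypothetical eigenvalue array in place of the values the toolbox would store in its lambda property (the numbers are made up for illustration; interpret the result as relative importance, not variance explained in the PCA sense):

```python
import numpy as np

# Hypothetical DM eigenvalues, stand-ins for the values stored
# in the toolbox's lambda property after fitting with DM.
lambdas = np.array([0.8, 0.4, 0.2, 0.1])

# Relative importance of each gradient: eigenvalue / sum of eigenvalues.
ratios = lambdas / lambdas.sum()

print(ratios)  # first gradient accounts for 0.8 / 1.5 ≈ 53% of the total
```

The ratios always sum to one, so they behave like percentages, but because DM eigenvalues are scaled they quantify relative prominence of the gradients rather than variance explained.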
Thank you.
Lv