MICA-MNI / BrainSpace

BrainSpace is an open-access toolbox for the identification and analysis of gradients from neuroimaging and connectomics datasets, available in both Python and MATLAB.
http://brainspace.readthedocs.io
BSD 3-Clause "New" or "Revised" License

Question about 'procrustes' #42

Closed: ly6ustc closed this issue 3 years ago

ly6ustc commented 3 years ago

Hi, I'm learning to use BrainSpace for gradient calculation, and I have some questions about aligning the gradient components. Usually, for fair comparisons, we need to align the gradient components of each individual to a group template, and Procrustes alignment seems a good choice. Should we put all the individual data into this procedure at once? In the paper "BrainSpace: a toolbox for the analysis of macroscale gradients in neuroimaging and connectomics datasets", it appears that the data of two subjects are entered into the procedure together (see figure below).

[figure: example from the paper, with both subjects' data passed to a single alignment call]

But I have also found papers that add the individual data one by one. Importantly, the results of these two strategies are not consistent (`.aligned_`). I guess this is because the reference template is derived from all of the individual data that are passed in, as in the figure below.

[figure: subjects aligned to the reference template one at a time]

So which strategy is right, or recommended? Besides, if eigenvalue multiplicity or very close eigenvalues lead to different component orderings between individuals, can Procrustes alignment handle this case? Would it realign the component orders?
Finally, I found that some papers report the explanation rate of a component. In my opinion, it is the percentage of that component's lambda relative to the total lambdas. If that is right, should we compute all components (i.e., N - 1, where N is the size of the similarity matrix) to obtain the total lambdas? Or is the total restricted by the number of components we set, rather than the maximum number of components? And after Procrustes alignment, is the explanation rate of a component unchanged?

Thanks!

ReinderVosDeWael commented 3 years ago

Hi,

I'm not aware of any formal comparisons of aligning all subjects simultaneously versus separately. Intuitively, I'd expect that aligning subjects individually results in a better match to the reference, whereas aligning them simultaneously results in a better alignment across subjects. In my work I've always aligned all subjects simultaneously, but truth be told, I've never tried running each subject separately.
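For concreteness, here is a minimal sketch of the two strategies in the Python API (my own illustration, not part of the original reply; the random matrices are hypothetical stand-ins for real subject connectomes):

```python
import numpy as np
from brainspace.gradient import GradientMaps

rng = np.random.default_rng(0)
n_regions, n_subjects = 100, 5

# Hypothetical data: symmetric, non-negative stand-ins for connectivity.
def fake_conn():
    m = rng.random((n_regions, n_regions))
    return (m + m.T) / 2

conn_subjects = [fake_conn() for _ in range(n_subjects)]

# Strategy 1: all subjects simultaneously. Passing a list to fit() runs
# iterative Procrustes across all inputs; aligned gradients end up in
# gm_joint.aligned_ (a list with one array per subject).
gm_joint = GradientMaps(n_components=10, kernel='normalized_angle',
                        alignment='procrustes', random_state=0)
gm_joint.fit(conn_subjects)

# Strategy 2: build a fixed group reference first, then align each
# subject to it separately via the `reference` argument of fit().
gm_ref = GradientMaps(n_components=10, kernel='normalized_angle',
                      random_state=0)
gm_ref.fit(np.mean(conn_subjects, axis=0))

aligned_one_by_one = []
for conn in conn_subjects:
    gm = GradientMaps(n_components=10, kernel='normalized_angle',
                      alignment='procrustes', random_state=0)
    gm.fit(conn, reference=gm_ref.gradients_)
    aligned_one_by_one.append(gm.aligned_)
```

With strategy 2 the reference is fixed, so adding or removing a subject cannot affect the others' aligned gradients; with strategy 1 it can, which would explain the discrepancy in `.aligned_` described above.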

Procrustes alignment only uses the eigenvectors, not the eigenvalues. As long as the eigenvectors are reasonably similar, Procrustes alignment should resolve different component orderings.
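To see why an orthogonal rotation can absorb swapped orderings and sign flips, here is a plain-NumPy sketch of classical orthogonal Procrustes (BrainSpace's implementation additionally iterates and can build the reference from the mean of the inputs, but the core rotation is the same idea):

```python
import numpy as np

def procrustes_rotate(source, target):
    """Rotate `source` (n_vertices x n_components) to best match `target`.

    The orthogonal R minimizing ||source @ R - target||_F is R = U @ Vt,
    where U, Vt come from the SVD of source.T @ target.
    """
    u, _, vt = np.linalg.svd(source.T @ target)
    return source @ (u @ vt)

# Toy check: the "subject" gradients equal the reference gradients with
# two components swapped and one sign flipped; Procrustes undoes both.
rng = np.random.default_rng(0)
ref, _ = np.linalg.qr(rng.normal(size=(50, 3)))  # orthonormal columns
sub = ref[:, [1, 0, 2]].copy()
sub[:, 2] *= -1

print(np.allclose(procrustes_rotate(sub, ref), ref))  # True
```

Because R only has to be orthogonal, it is free to (approximately) permute and flip components, which is what resolves ordering differences between subjects.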

The ratio of one lambda to the total of the lambdas has indeed been used as a "variance explained" metric. Strictly speaking, it's not exactly variance explained, but it can still serve as a rule of thumb for the importance of each component.
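As a small worked example (mine, not from the original reply): BrainSpace exposes the eigenvalues of the fitted components as `lambdas_`, so this ratio is a one-liner. Note that `lambdas_` only holds the components that were actually computed, so the denominator, and hence the ratio, depends on the `n_components` you requested rather than on all N - 1 possible components:

```python
import numpy as np

# gm_ref is a fitted GradientMaps object (see the earlier sketch);
# lambdas_ holds one eigenvalue per *computed* component.
lam = np.asarray(gm_ref.lambdas_)
ratio = lam / lam.sum()
print(ratio)        # rule-of-thumb "variance explained" per component
print(ratio.sum())  # 1.0 by construction, over the computed components
```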

Procrustes alignment aligns the unaligned gradients but leaves the lambdas as they are. As such, the lambdas are associated only with the unaligned gradients. When the overall rotations are relatively small this isn't a big issue, but it is use-case dependent.

ly6ustc commented 3 years ago

Thank you very much! It helps a lot.