ThomasYeoLab / CBIG

MIT License

run Schaefer 18 parcellation code with HCP data #29

Closed snapfinger closed 2 years ago

snapfinger commented 2 years ago

Hi there, I'm trying to run the Schaefer18 parcellation code with HCP data. I've computed the average gradients of both hemispheres with Gordon's code, which gives a matrix of dimension #vertices × #vertices (excluding the medial wall). However, loading a sample gradient matrix in this repo gives two variables, border and border_matrix, with dimensions 1 × #vertices and 6 × #vertices respectively (including the medial wall). It's not clear to me how to compute these gradient-related variables, which are used in the actual parcellation computation, from the gradient matrix I have.

Second, there is a function CBIG_build_sparse_gradient in CBIG_gwMRF_graph_cut_clustering_split_newkappa_prod.m with the comment "this function only works for fsaverageX". First, I wonder what this function is for, i.e. why we need to compute another "sparse gradient" from the gradient files. Second, with HCP data, is this function, which seems to compute neighborhood vertices while treating the first 12 vertices differently from the rest, applicable to the HCP surface space (fsLR32k)?

Thanks.

alexandschaefer commented 2 years ago

Hi,

As a disclaimer: I have not worked on this code for 6 years, so I hope my memories are correct. border was the gradient map we received from Gordon. I am surprised you got something in the shape of #vertices × #vertices; I think it should be whatever he mapped in the paper, which would certainly be 1 × #vertices. The fsaverage mesh is a graph in which each vertex has 6 neighbors, with the exception of the first 12 vertices, which have 5 neighbors. The gradient values are needed to get weights for the edges, which we need for the MRF.
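The 12-vertices-with-5-neighbors structure comes from the mesh being a subdivided icosahedron. This is a standalone sketch (in Python rather than the repo's MATLAB; the icosahedron face list and subdivision scheme are generic, not CBIG code) that verifies the degree counts:

```python
# Sketch: only the 12 original icosahedron vertices of a subdivided
# icosahedron mesh keep degree 5; every vertex added by subdivision has 6.
import numpy as np

# The 20 triangular faces of a regular icosahedron (12 vertices, 30 edges).
ICO_FACES = np.array([
    [0, 11, 5], [0, 5, 1], [0, 1, 7], [0, 7, 10], [0, 10, 11],
    [1, 5, 9], [5, 11, 4], [11, 10, 2], [10, 7, 6], [7, 1, 8],
    [3, 9, 4], [3, 4, 2], [3, 2, 6], [3, 6, 8], [3, 8, 9],
    [4, 9, 5], [2, 4, 11], [6, 2, 10], [8, 6, 7], [9, 8, 1],
])

def subdivide(faces, n_vertices):
    """Split each triangle into 4 by adding one new vertex per edge."""
    midpoint = {}
    new_faces = []

    def mid(a, b):
        nonlocal n_vertices
        key = (min(a, b), max(a, b))
        if key not in midpoint:
            midpoint[key] = n_vertices
            n_vertices += 1
        return midpoint[key]

    for a, b, c in faces:
        ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
        new_faces += [[a, ab, ca], [ab, b, bc], [ca, bc, c], [ab, bc, ca]]
    return np.array(new_faces), n_vertices

def vertex_degrees(faces, n_vertices):
    """Number of distinct neighbors of each vertex in the triangle mesh."""
    neighbors = [set() for _ in range(n_vertices)]
    for a, b, c in faces:
        neighbors[a].update((b, c))
        neighbors[b].update((a, c))
        neighbors[c].update((a, b))
    return np.array([len(s) for s in neighbors])

faces, n = ICO_FACES, 12
for _ in range(2):                         # two subdivision levels as a demo
    faces, n = subdivide(faces, n)

deg = vertex_degrees(faces, n)
print(np.sum(deg == 5), np.sum(deg == 6))  # prints "12 150"
```

The same `vertex_degrees` helper can be run on any faces array (e.g. one loaded from an fsLR32k surface file) to check its neighborhood structure.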

Regarding CBIG_build_sparse_gradient: the first 12 vertices are treated separately as they do not have 6 neighbors. I think what is important in this function is that we apply CBIG_StableE, which applies an exponential function to the gradient weights. In addition, it uses MATLAB's sparse function (https://www.mathworks.com/help/matlab/ref/sparse.html) to create a sparse representation. I guess your question is why we switch between dense and sparse representations; it might have been that some operations were not implemented for sparse matrices.
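A hypothetical sketch (Python/SciPy rather than the actual MATLAB implementation; the exact exponential form, the constant `k`, and the `-1` padding convention are all assumptions for illustration, not the CBIG_StableE formula) of building a sparse edge-weight matrix from per-vertex gradient values:

```python
# Hypothetical sketch, NOT the CBIG implementation: per-vertex gradients ->
# sparse vertex-by-vertex edge-weight matrix for an MRF.
import numpy as np
from scipy.sparse import csr_matrix

def build_sparse_gradient(gradient, neighbors, k=15.0):
    """gradient: (n,) per-vertex gradient values.
    neighbors: (n, 6) neighbor indices, padded with -1 where a vertex
    has fewer than 6 neighbors (e.g. the first 12 vertices)."""
    n = gradient.shape[0]
    rows, cols, vals = [], [], []
    for i in range(n):
        for j in neighbors[i]:
            if j < 0:
                continue
            # Exponential weighting (assumed form): a high gradient between
            # two vertices yields a low edge affinity.
            w = np.exp(-k * max(gradient[i], gradient[j]))
            rows.append(i); cols.append(j); vals.append(w)
    return csr_matrix((vals, (rows, cols)), shape=(n, n))

# Tiny example: 4 vertices in a ring, each with 2 neighbors (padded to 6).
grad = np.array([0.1, 0.8, 0.1, 0.1])
nbrs = np.full((4, 6), -1)
nbrs[:, :2] = [[1, 3], [2, 0], [3, 1], [0, 2]]
W = build_sparse_gradient(grad, nbrs)
```

Because the weight depends on the unordered pair via `max`, the resulting matrix is symmetric, and only the O(6n) neighbor entries are stored rather than the full n × n dense matrix.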

You would need to load the fsLR32k surface you want to use, look at its neighborhood structure, and maybe adjust the code accordingly. If, for example, all vertices have 6 neighbors, you could simplify the code by not treating the first 12 separately.

Hope this helps.

snapfinger commented 2 years ago

Hi @alexandschaefer, thanks a lot for your detailed reply! Regarding the gradient from Gordon's method, I checked the code I ran and found there is a step after the #vertices × #vertices output that computes the average along one dimension (though this line of code was commented with "unused but useful to look at" in his released code). I guess your 1 × #vertices gradient input is probably this averaged gradient value for each vertex?

Matilda-Yxx commented 2 years ago

Hi @snapfinger. Regarding your latest question: yes, the 1 × #vertices gradient input should be the averaged gradient value for each vertex (consistent with the method described in Gordon's original paper).
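For concreteness, a minimal numpy sketch of the reduction being described (which axis to average over, and whether the medial wall is included at that point, are assumptions to check against Gordon's released code):

```python
# Collapsing a dense vertex-by-vertex gradient matrix to one averaged
# gradient value per vertex (the 1 x #vertices input discussed above).
import numpy as np

rng = np.random.default_rng(0)
n_vertices = 5                              # tiny stand-in for the real mesh size
G = rng.random((n_vertices, n_vertices))    # dense vertex-by-vertex gradient matrix
G = (G + G.T) / 2                           # gradient between a vertex pair is symmetric

avg_gradient = G.mean(axis=0)               # one averaged gradient value per vertex
print(avg_gradient.shape)                   # prints "(5,)"
```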

snapfinger commented 2 years ago

Got it, thanks!