netneurolab / neuromaps

A toolbox for comparing brain maps
https://netneurolab.github.io/neuromaps

power usage perm=1000 and GPU #162

Closed baharhazal closed 2 weeks ago

baharhazal commented 5 months ago

Description of issue

Hi,

We are currently integrating Neuromaps into our ongoing projects. We initiated testing with a single null model, burt2018, using 1000 permutations in the fsaverage 10k space. However, it has been over a week without any progress. Is there a possibility that the function may have malfunctioned without producing any errors or warnings?

Additionally, we are exploring ways to use the GPUs on our machines with neuromaps. So far, we have not been able to make meaningful use of them. I apologize for my lack of expertise in this area, but I am hopeful you can advise on the correct approach to achieve this within neuromaps.

Thanks in advance,


justinehansen commented 3 months ago

Hi, sorry about the delayed response! The parametric (e.g., burt, moran) nulls will take a really long time for dense maps. Since your data is on the surface, I strongly recommend using the spatial permutation nulls (i.e., the "spin test"), which will be much faster. For example, try alexander_bloch.
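As a rough illustration of what a permutation null does (a NumPy-only sketch, not neuromaps code — it uses plain random shuffles as a stand-in for the sphere rotations that `nulls.alexander_bloch` performs, and all variable names here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1234)

# Two hypothetical surface maps (e.g., ~10k fsaverage vertices).
n_vertices = 10_000
x = rng.standard_normal(n_vertices)
y = 0.5 * x + rng.standard_normal(n_vertices)  # correlated with x

# Empirical correlation between the two maps.
r_emp = np.corrcoef(x, y)[0, 1]

# Null distribution: correlate x against permuted copies of y.
# (A real spin test permutes by rotating the spherical surface mesh,
# which preserves spatial autocorrelation; plain shuffling does not.)
n_perm = 1000
r_null = np.empty(n_perm)
for i in range(n_perm):
    r_null[i] = np.corrcoef(x, rng.permutation(y))[0, 1]

# Two-sided permutation p-value.
p = (1 + np.sum(np.abs(r_null) >= abs(r_emp))) / (1 + n_perm)
print(f"r = {r_emp:.3f}, p = {p:.4f}")
```

The key practical point is that each spin is just a reordering of vertices followed by a correlation, which is why spin-based nulls are so much cheaper than regenerating surrogate maps from a distance matrix.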

baharhazal commented 3 months ago

Thank you for your reply and suggestion. We currently have both volumetric and surface data available, and I'm planning to try the spatial permutation nulls (i.e., the "spin test"). However, we're encountering a bus error, despite having sufficient memory, around permutation 10 of a parametric null with surface data in MNI space. Do you know what might be causing this, or is there an alternative besides the spin test and parcellation you suggested? Also, is there a way to predict how much memory is sufficient? Thanks :)

liuzhenqi77 commented 2 weeks ago

Hi,

For the BUS error, we're not sure what happened without more context or debug info. It is unlikely to be directly related to neuromaps, though.

I agree with Justine that the spin test and parcellation are likely your best options right now. Parametric nulls at the voxel/vertex level require a lot of memory and storage space. You could run them on smaller or parcellated data first to get a rough estimate of time and memory usage. A GPU is also unlikely to help unless the underlying BrainSMASH functions are redesigned with GPU-specific optimizations.
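One back-of-the-envelope way to do that estimate (a sketch, not neuromaps code; it assumes, as in BrainSMASH-style surrogates, that a dense vertex-by-vertex distance matrix dominates the memory cost — the stand-in workload and scaling figures are illustrative only):

```python
import time
import tracemalloc

import numpy as np

def dist_matrix_gb(n_vertices: int) -> float:
    """Memory for one dense float64 vertex-by-vertex distance matrix
    (assumed to dominate BrainSMASH-style surrogate generation)."""
    return n_vertices ** 2 * 8 / 1e9

# Time and peak memory of a small stand-in workload; costs grow roughly
# quadratically with vertex count, so scale up from here.
tracemalloc.start()
t0 = time.perf_counter()
d = np.random.default_rng(0).random((1_000, 1_000))  # stand-in distance matrix
d.sort(axis=1)  # stand-in for the per-surrogate work on that matrix
elapsed = time.perf_counter() - t0
_, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

print(f"small run: {elapsed:.3f} s, peak {peak / 1e6:.1f} MB")
for n in (1_000, 10_000, 100_000):
    print(f"{n:>7} vertices -> distance matrix ~ {dist_matrix_gb(n):.3f} GB")
```

Note the jump from parcellated (hundreds of regions) to dense (tens of thousands of vertices) data: a 10k-vertex distance matrix alone is on the order of a gigabyte, before any per-permutation work.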

I'm closing this for now but please feel free to reopen if you have any updates.