ccraddock / cluster_roi

Python script to perform the spatially constrained parcellation of functional MRI data as described in Craddock et al., Human Brain Mapp., 33 (9) 2012
http://ccraddock.github.io/cluster_roi/

Memory Error in group_binfile_parcellation #6

Open mehrshadg opened 7 years ago

mehrshadg commented 7 years ago

Hey,

I am using this package to parcellate fMRI data from 60 subjects. When I run the group_binfile_parcellation script, I get a MemoryError. My computer has 16 GB of RAM. Reading the code, the line W=W + csc_matrix((ones(len(sparse_i)),(sparse_i,sparse_j)), (n_voxels,n_voxels),dtype=double) raises the MemoryError after about 4 subjects have been processed. I changed it to W+=csc_matrix((ones(len(sparse_i)),(sparse_i,sparse_j)), (n_voxels,n_voxels),dtype=double) to keep numpy from creating another array, but the error still occurred, this time after 7 subjects.

Is there a way to work around this issue, perhaps by optimizing the code? I am not familiar with Python and its optimization techniques.
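One possible direction (not part of this package; subject_files and load_subject_edges below are hypothetical stand-ins for however the script already produces the per-subject sparse_i/sparse_j indices, and n_voxels is the mask voxel count) would be to collect all subjects' index pairs first and build a single sparse matrix at the end, since scipy's COO format sums duplicate entries when converted to CSC. A minimal sketch:

```python
# Hedged sketch: accumulate connectivity indices across subjects and build
# one sparse matrix at the end, instead of adding a CSC matrix per subject.
# `subject_files` and `load_subject_edges` are hypothetical placeholders for
# the per-subject processing the script already does; `n_voxels` is the
# number of in-mask voxels.
import numpy as np
from scipy.sparse import coo_matrix

all_i, all_j = [], []
for subj_file in subject_files:
    sparse_i, sparse_j = load_subject_edges(subj_file)
    all_i.append(np.asarray(sparse_i))
    all_j.append(np.asarray(sparse_j))

rows = np.concatenate(all_i)
cols = np.concatenate(all_j)

# coo_matrix sums duplicate (row, col) entries when converted to CSC,
# which is equivalent to summing one adjacency matrix per subject.
W = coo_matrix((np.ones(len(rows)), (rows, cols)),
               shape=(n_voxels, n_voxels), dtype=np.float64).tocsc()
```

Whether this actually fits in 16 GB depends on how many edges the 60 subjects contribute, but it avoids allocating a fresh dense-index CSC structure on every addition.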

ccraddock commented 7 years ago

What is the resolution of your data? Did you use a gray matter mask? How many voxels are in the mask?

mehrshadg commented 7 years ago

Yes, I used a gray matter mask. I created an average binary gray matter mask and registered it to the MNI 152 template; each of my functional datasets is also registered to MNI 152. The functional TR is 2.2 s. The mask contains 181,676 non-zero and 720,953 zero voxels. I used (nibabel.load(gm_mask_standard).get_data().flatten() > 0).sum() to calculate the total number of non-zero voxels.

ccraddock commented 7 years ago

What is your voxel size?

181,676 is very large for a gray matter mask. In the paper, which used 4 mm isotropic voxels, there were about 18,500 voxels. I'm guessing yours are at 2 mm isotropic, which is likely a higher resolution than the data were acquired at. Upsampling does not provide any new information and only makes the problem more computationally intensive.
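A quick way to check this (a small sketch, assuming the gm_mask_standard path from the comment above; nibabel's header.get_zooms() reports the voxel dimensions in mm, and get_fdata() is the newer equivalent of get_data()):

```python
# Sketch: report voxel dimensions and non-zero voxel count for the mask.
import nibabel as nib
import numpy as np

mask_img = nib.load(gm_mask_standard)  # path assumed from the comment above
print("voxel size (mm):", mask_img.header.get_zooms()[:3])
print("non-zero voxels:", int(np.count_nonzero(mask_img.get_fdata())))
```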

I would use a larger voxel size or a computer with significantly more RAM.
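If you go the larger-voxel route, one way to do it (not part of this package; this sketch assumes nilearn is installed and uses nearest-neighbour interpolation so the mask stays binary) is to resample the mask to 3 mm isotropic:

```python
# Sketch: resample the gray matter mask to 3 mm isotropic (assumes nilearn).
# Nearest-neighbour interpolation keeps the mask binary.
import numpy as np
from nilearn.image import resample_img

mask_3mm = resample_img(gm_mask_standard,
                        target_affine=np.diag([3.0, 3.0, 3.0]),
                        interpolation='nearest')
mask_3mm.to_filename('gm_mask_3mm.nii.gz')
```

The functional data would then need to be registered or resampled onto the same 3 mm grid so the mask and data voxels line up.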

mehrshadg commented 7 years ago

The original structural and functional voxel sizes are 1 mm and 3 mm isotropic, but when I created the structural mask I registered both the mask and the functional data to MNI 152 at 2 mm isotropic. So, if I understand correctly, you are telling me to downsample the mask to 3 mm voxels and register both the data and the mask at 3 mm?