chrisgorgo opened 6 years ago

Typical neuroimaging pipelines work from images. Hence it would be interesting to turn the extracted signals into images.

OK, I wrote a script to do that. The basic idea is simple: it is just a matter of inverting the maskers used in the signal-extraction code (https://github.com/ramp-kits/autism/blob/master/preprocessing/extract_time_series.py). But I went further and wrote code to blend multiple atlases. The results are cool.
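For intuition, here is the masker round trip in a nutshell (a minimal sketch with placeholder names atlas_img and fmri_img, not part of the script below):

from nilearn import input_data
# atlas_img: any 3D label image; fmri_img: a 4D functional image (placeholders)
masker = input_data.NiftiLabelsMasker(labels_img=atlas_img)
signals = masker.fit_transform(fmri_img)       # 4D image -> (n_timepoints, n_regions)
imgs_back = masker.inverse_transform(signals)  # (n_timepoints, n_regions) -> 4D image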
The following assumes that you have run the download script for basc197 and craddock:
$ python download_data.py craddock_scorr_mean
$ python download_data.py basc197
Here is the code:
from problem import get_train_data
data, labels = get_train_data()
from nilearn import input_data, datasets
import pandas
#######################################################################
# Reconstruct an image for BASC 197
# Grab the data for the first subject:
basc_series = pandas.read_csv(data['fmri_basc197'].iloc[0])
basc_atlas = datasets.fetch_atlas_basc_multiscale_2015()
basc_masker = input_data.NiftiLabelsMasker(labels_img=basc_atlas.scale197)
# We need to fit the masker to set its voxel size and mask
basc_masker.fit(basc_atlas.scale197)
imgs_basc = basc_masker.inverse_transform(basc_series)
# Let's inspect the first image
from nilearn import plotting, image
plotting.plot_epi(image.index_img(imgs_basc, 0),
                  title='From BASC197 atlas',
                  cut_coords=(1, -71, 31))
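As a quick sanity check (a sketch, not part of the original script): transforming the reconstructed image back through the fitted masker should recover the region signals, since inverse_transform paints each region uniformly with its value.

recovered = basc_masker.transform(imgs_basc)
print(recovered.shape)  # (n_timepoints, 197): one column per BASC region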
#######################################################################
# Reconstruct an image for Craddock
# Grab the data for the first subject:
craddock_series = pandas.read_csv(data['fmri_craddock_scorr_mean'].iloc[0])
craddock_atlas = datasets.fetch_atlas_craddock_2012()
craddock_masker = input_data.NiftiLabelsMasker(
    labels_img=image.index_img(craddock_atlas.scorr_mean, 25))
# We need to fit the masker to set its voxel size and mask
craddock_masker.fit(craddock_atlas.scorr_mean)
imgs_craddock = craddock_masker.inverse_transform(craddock_series)
# Let's inspect the first image
plotting.plot_epi(image.index_img(imgs_craddock, 0),
                  title='From Craddock atlas',
                  cut_coords=(1, -71, 31))
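In case you wonder about the index_img(..., 25) above: craddock_atlas.scorr_mean is a 4D image whose fourth dimension indexes parcellation levels, and index 25 simply picks one of them. You can check how many levels are available (a sketch):

print(image.load_img(craddock_atlas.scorr_mean).shape)
# The last dimension is the number of parcellation levels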
#######################################################################
# Blend the two
# The challenge here is that the coverage of the two atlases differs.
# We will rely on numpy's nanmean for that
# You may ponder a bit on the expression below... It's not that hard
imgs_blended = image.math_img(
    """np.nanmean([
           np.where(basc_labels[..., np.newaxis] == 0,
                    np.nan, imgs_basc),
           np.where(craddock_labels[..., np.newaxis] == 0,
                    np.nan, imgs_craddock),
       ], axis=0)""",
    basc_labels=basc_masker.labels_img_,
    imgs_basc=imgs_basc,
    craddock_labels=image.resample_to_img(craddock_masker.labels_img_,
                                          imgs_basc, interpolation='nearest'),
    imgs_craddock=image.resample_to_img(imgs_craddock, imgs_basc),
)
plotting.plot_epi(image.index_img(imgs_blended, 0),
                  title='Blended',
                  cut_coords=(1, -71, 31))
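To see what the nanmean expression does, here is a toy numpy illustration (not part of the script): voxels outside an atlas's coverage are set to NaN, and np.nanmean ignores NaNs, so a voxel covered by only one atlas keeps that atlas's value, while a voxel covered by both gets the average.

import numpy as np
a = np.array([1.0, np.nan, 3.0])   # signal from atlas A (NaN = not covered)
b = np.array([2.0, 4.0, np.nan])   # signal from atlas B
print(np.nanmean([a, b], axis=0))  # [1.5, 4.0, 3.0]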
Keep in mind that the servers used to run the submissions have limited memory, and that if you do this for 2000 subjects you will create a lot of data. The danger is that you bring the servers down, which means that your submission won't run. Also, don't use this to do crazy things, as you'll blow your computing quotas.
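If you do reconstruct images for many subjects, one way to keep memory bounded (a hypothetical sketch, with a made-up output filename pattern) is to process one subject at a time and write each result to disk:

for i in range(len(data)):
    series = pandas.read_csv(data['fmri_basc197'].iloc[i])
    imgs = basc_masker.inverse_transform(series)
    imgs.to_filename('reconstructed_%04d.nii.gz' % i)  # hypothetical filename
    del imgs  # drop the 4D volume before the next iteration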
I am currently trying to do the same but with the MSDL atlas. This is what I am doing (just a copy-paste of your code):
# Reconstruct an image for MSDL
# Grab the data for the first subject:
msdl_series = pandas.read_csv(data['fmri_msdl'].iloc[0])
msdl_atlas = datasets.fetch_atlas_msdl()
msdl_masker = input_data.NiftiLabelsMasker(labels_img=msdl_atlas.maps)
# We need to fit the masker to set its voxel size and mask
msdl_masker.fit(msdl_atlas.maps)
imgs_msdl = msdl_masker.inverse_transform(msdl_series)
But it results in an error:
*** nilearn._utils.exceptions.DimensionError: Input data has incompatible dimensionality:
Expected dimension is 3D and you provided a 4D image. See http://nilearn.github.io/manipulating_images/input_output.html.
And that is weird, because I thought we were always dealing with 4D images independently of the atlas. How do we select the first 3D image of the MSDL atlas? And what time points do the images we see for Craddock and BASC correspond to?
Thank you!
msdl_masker = input_data.NiftiLabelsMasker(labels_img=msdl_atlas.maps)
MSDL is a probabilistic atlas: it has maps instead of labels. You need to use a NiftiMapsMasker (and change 'labels_img' to 'maps_img').
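Following the pattern above, a minimal sketch of what that would look like (untested):

msdl_masker = input_data.NiftiMapsMasker(maps_img=msdl_atlas.maps)
msdl_masker.fit()
imgs_msdl = msdl_masker.inverse_transform(msdl_series)
# As with the other atlases, each 3D volume of imgs_msdl corresponds to one
# time point (one row of the extracted time series).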