nilearn / nilearn

Machine learning for NeuroImaging in Python
http://nilearn.github.io

[BUG] plots showing significant results from "outside" the brain #4111

Open firupolat opened 9 months ago

firupolat commented 9 months ago

nilearn version

0.10.2

Expected behavior

I ran a first-level and then a second-level GLM on my resting-state data, and the plots never match the brain image in the background. Attached are the first-level GLM plot of subject 40 with the contrast for Yeo network 7, as well as the second-level GLM plot from 40 participants pre and post intervention with the contrast drug vs. placebo.

sub40_yeo7

ts7

Current behavior & error messages

This is what I got:

I don't have any error messages, but the plots I get are not good: they show significant signal from outside the brain, and the significant results sometimes don't even make sense. See the plots I posted in the section above for the analysis of Yeo network 7. Maybe this happens because I preprocessed the data with fMRIPrep and have all the data in MNI152NLin2009cAsym space, and this doesn't align with the MNI space of nilearn.


Steps and code to reproduce bug


## FIRST LEVEL GLM
for sub in subject_ids:
    for ses in session_nrs:
        # load data
        path_to_imgs = f'{root}/{fmriprep}/{sub}/{ses}/func/{sub}_{ses}_task-{task}_space-MNI152NLin2009cAsym_desc-preproc_bold.nii.gz'
        fmri_img = load_img(path_to_imgs)
        fmri_img = image.smooth_img(fmri_img, 6.0)

        # load the mask for first level GLM
        path_to_mask = f'{root}/{fmriprep}/{sub}/{ses}/func/{sub}_{ses}_task-{task}_space-MNI152NLin2009cAsym_desc-brain_mask.nii.gz'
        mask_img = load_img(path_to_mask)

        # masker for extracting time_series from ROIs
        masker = NiftiLabelsMasker(labels_img=yeo["thin_7"], mask_img=mask_img, 
                                   standardize="zscore_sample", t_r=t_r
                                  )

        # define strategy
        for TAPAS, tapas_string in zip(TAPAS_strategy, TAPAS_strategy_string):
            confounds, sample_mask = load_confounds(img_files=path_to_imgs, strategy=TAPAS, motion="full", 
                                                    global_signal="basic", scrub=0, fd_threshold=0.5, 
                                                    std_dvars_threshold=3
                                                   )

            # Include physio regressors
            ## ...
            # define all regressors
            all_regs = pd.concat([confounds, physio_df], axis=1)

            fmri_img = image.clean_img(fmri_img, detrend=True, standardize=True, confounds=all_regs, 
                                       low_pass=0.08, high_pass=0.01, t_r=t_r, mask_img=mask_img)    

            # time_series from ROIs
            time_series = masker.fit_transform(fmri_img, 
                                               #confounds=all_regs
                                              )
            time_series_df = pd.DataFrame(time_series)

            # create design matrix
            design_matrix = make_first_level_design_matrix(frame_times=frametimes, hrf_model=None, drift_model = None, 
                                                           add_regs=time_series_df)

            # create contrast matrices
            # build one-hot contrast vectors, one per Yeo network (same vectors as before)
            n_regressors = design_matrix.shape[1]
            yeo_networks_TAPAS = [
                np.array([0] * i + [1] + [0] * (n_regressors - i - 1)) for i in range(7)
            ]

            # make first level glm
            fmri_glm = FirstLevelModel(t_r=t_r, slice_time_ref=slice_time_ref, hrf_model=None, drift_model=None, 
                                       mask_img=mask_img, target_affine=mask_img.affine, target_shape=mask_img.shape, 
                                       #smoothing_fwhm=6
                                      )
            fmri_glm = fmri_glm.fit(fmri_img, sample_masks=sample_mask, design_matrices=design_matrix)

            # contrast for statistical maps
            for yeo_nw_T, yeo_string in zip(yeo_networks_TAPAS, yeo_networks_strings):
                z_map = fmri_glm.compute_contrast(yeo_nw_T, output_type='z_score')

                # Plot the statistical maps
                # plotting.plot_stat_map(z_map, threshold=3.0, cut_coords=(0, -53, 26),
                #                        display_mode='mosaic',
                #                        title=f'{sub}_{ses}_task-{task}_{tapas_string}_first_level_z_map_{yeo_string}')

                # save the results
                z_map.to_filename(f'{glm_output}/{sub}/{ses}/func/{sub}_{ses}_task-{task}_{tapas_string}_first_level_z_map_{yeo_string}.nii.gz')

# define input images for second level for the TAPAS strategy

for sub in subject_ids:
    for ses in session_nrs:
        for yeo_string in yeo_networks_strings:
            for tapas_string in TAPAS_strategy_string:

                if ses == 'ses-01':
                    if yeo_string == 'yeo_1':
                        if tapas_string == 'HMP24_TAPAS_scrub':

                            # load z map without gsr    
                            z_map_path = f'{glm_output}/{sub}/{ses}/func/{sub}_{ses}_task-{task}_{tapas_string}_first_level_z_map_{yeo_string}.nii.gz'
                            z_map = load_img(z_map_path)
                            yeo1_z_pre_sample_TAPAS.append(z_map)

## rest of code...

# SECOND LEVEL GLM

# make second-level GLM with z_maps, thresholded at p < 0.001 uncorrected

# create design matrix
design_matrix = pd.DataFrame(np.hstack((session_effect[:, np.newaxis], drug_effect[:, np.newaxis], paired_effect)), 
                             columns=["session_effect"] + ["drug_effect"] + subject_ids)

# contrast for statistical maps
second_level_contrast = "drug_effect" # choose between contrast = drug_effect / session_effect

# choose threshold
p_val = 0.001
p001_uncorrected = norm.isf(p_val)
threshold = p001_uncorrected

# define strategy

for yeo_string in yeo_networks_strings:
    for tapas_string in TAPAS_strategy_string:

        if yeo_string == 'yeo_1':
            if tapas_string == 'HMP24_TAPAS_scrub':
                second_level_input = yeo1_z_pre_sample_TAPAS + yeo1_z_post_sample_TAPAS
            elif tapas_string == 'HMP24_TAPAS_scrub_gsr':
                second_level_input = yeo1_z_pre_sample_TAPAS_gsr + yeo1_z_post_sample_TAPAS_gsr

            second_level_model = SecondLevelModel()
            # fit() returns the fitted model itself, not a statistical map
            second_level_model = second_level_model.fit(second_level_input=second_level_input,
                                                        design_matrix=design_matrix)
            stat_map_img = second_level_model.compute_contrast(second_level_contrast=second_level_contrast,
                                                               #first_level_contrast=yeo_nw_T,
                                                               output_type='z_score')

            # Plot the statistical maps
            plotting.plot_stat_map(stat_map_img,
                                   cut_coords=(0, -53, 26),
                                   #display_mode='mosaic',
                                   title=f'task-{task}_{tapas_string}_second_level_z_map_{yeo_string}',
                                   threshold=threshold
                                  )
            # save the results
            stat_map_img.to_filename(f'{glm_output}/task-{task}_{tapas_string}_second_level_z_map_{yeo_string}.nii.gz')

        elif yeo_string == 'yeo_2':

## REST OF THE CODE FOR ALL YEO NETWORKS 1-7
ymzayek commented 9 months ago

@bthirion or @Remi-Gau any intuition about this? Could it be an alignment issue? My narrow understanding of fMRIPrep and what is described here is that the data and mask resulting from the preprocessing should align with the Yeo atlas in nilearn, which is in the same MNI space.

Remi-Gau commented 9 months ago

Realignment is probably worth checking, but I would be surprised if it were the issue in this case.

firupolat commented 9 months ago

I have tried different things, for example using another template, resampling the image, and using a different background image, but nothing worked :(

bthirion commented 9 months ago

I think the issue is related to the brain mask you use. Is it exactly the brain mask provided with nilearn? Did you resample it somehow? Can you share it?

firupolat commented 9 months ago

Do you mean the mask_img that I used? It's the one that was generated automatically by fMRIPrep for each subject and scan.

bthirion commented 9 months ago

This is probably the culprit. I would advise using a fixed MNI brain template (independent of the dataset).

firupolat commented 9 months ago

But, for example, for the function image.clean_img the mask_img has to match the fmri_img and has to be binary, otherwise I get an error message. I had used the MNI brain template (via TemplateFlow) as the background image of the plots, but this still didn't work and I had signals from the skull or outside the skull. How else could I use the MNI brain template?

bthirion commented 9 months ago

You can reinterpolate the MNI brain mask to your data, using nearest neighbor interpolation to keep it binary.

ymzayek commented 9 months ago

@firupolat to do that you can use resample_to_img

mask_img = image.resample_to_img(mni_brain_mask, fmri_img, interpolation="nearest")

at the beginning of your pipeline.

firupolat commented 9 months ago

mask_img = image.resample_to_img(mni_brain_mask, fmri_img, interpolation="nearest")

Okay, thank you, with this it generally works. But why do I have to perform the steps in this direction and not vice versa? By that I mean: could I not swap the order of the image and the mask, so that my functional scan is the source image and the mni_brain_mask is the target image?

mask_img = image.resample_to_img(fmri_img, mni_brain_mask, interpolation="nearest")

To me, that would make more sense, if it works, because I would want to use the resampled functional scan in other software as well, and it would be much easier to have the same brain size for every scan I have, i.e. every scan exactly in the shape of the MNI152NLin2009cAsym brain.

@ymzayek

ymzayek commented 9 months ago

@firupolat that was just an example suggestion following from the discussion, but yes, you could do the inverse. Note also that NiftiLabelsMasker has some built-in resampling functionality that defaults to resampling the labels image and mask image to your data; if you choose 'labels' as the target, it resamples the mask_img and data to the labels image. If None is chosen, you have to take care of the resampling beforehand, as suggested. See the resampling_target parameter here: https://nilearn.github.io/stable/modules/generated/nilearn.maskers.NiftiLabelsMasker.html#nilearn.maskers.NiftiLabelsMasker