nipreps / fmriprep

fMRIPrep is a robust and easy-to-use pipeline for preprocessing of diverse fMRI data. The transparent workflow dispenses of manual intervention, thereby ensuring the reproducibility of the results.
https://fmriprep.org
Apache License 2.0

Failed resampling of BOLD into MNI space - starting with v20.1.1 #2307

Closed eburkevt closed 3 years ago

eburkevt commented 3 years ago

We are getting poor MNI normalizations (see images below) using fMRIPrep versions 20.1.1 and 20.2.0 LTS. We had good normalizations with version 20.0.6. I have not tested any other versions.

What version of fMRIPrep are you using?

versions 20.2.0 LTS and 20.1.1

Top image is example of expected normalization in SPM12, middle image is the normalization obtained using fMRIPrep v20.1.1 (similar issues with v20.2.0), bottom image is normalization obtained using fMRIPrep v20.0.6 (no issues).

(screenshots attached: UCLA_good, UCLA_bad, UCLA_good_using_20.0.6)

What kind of installation are you using? Containers (Singularity, Docker), or "bare-metal"?

Singularity

What is the exact command-line you used?

singularity run --cleanenv ${FMRIPREP}/fmriprep-20.1.1.simg \
    ${basefolder}/${projectname}/ ${basefolder}/${projectname}/derivatives \
    --fs-license-file $HOME/freesurfer6.txt \
    -w ${basefolder}/work \
    --nthreads 2 \
    participant \
    --participant-label ${subjid} 

Have you checked that your inputs are BIDS valid?

Yes, using bids-validator

Did fMRIPrep generate the visual report for this particular subject? If yes, could you share it?

Shared with nipreps@gmail.com on my Google Drive under 'fMRIPrep'

Can you find some traces of the error reported in the visual report (at the bottom) or in crashfiles?

fMRIPrep finished without any errors but produced a poor MNI normalization

Are you reusing previously computed results (e.g., FreeSurfer, Anatomical derivatives, work directory of previous run)?

In the last run (using v20.2.0) I used prior FreeSurfer and work output files, but in past runs I cleared both the FreeSurfer derivatives and work directories before running.

fMRIPrep log

If you have access to the output logged by fMRIPrep, please make sure to attach it as a text file to this issue.

fmriprep.o1057033.txt

oesteban commented 3 years ago

Hi @eburkevt, your report does look good overall (there are little things one could pick up on, but they definitely don't explain the spatial normalization issue you are seeing).

To debug this, I think there are a couple of useful things to do on your end:

  1. Make sure that this is happening on all spatially normalized outputs. You posted the sub-10316_task-rest_space-MNI152NLin6Asym_desc-smoothAROMAnonaggr_bold.nii.gz file - could you check whether sub-10316_task-rest_space-MNI152NLin2009cAsym_bold.nii.gz looks awful too?
  2. Find out whether this is a problem of misalignment or an error when concatenating transforms. I think the easiest way to check this is rerunning fMRIPrep with the addition of --output-spaces MNI152NLin6Asym:res-2 argument:
    singularity run --cleanenv ${FMRIPREP}/fmriprep-20.1.1.simg \
       ${basefolder}/${projectname}/ ${basefolder}/${projectname}/derivatives \
       --fs-license-file $HOME/freesurfer6.txt \
       -w ${basefolder}/work \
       --nthreads 2 \
       participant \
       --participant-label ${subjid} \
       --output-spaces MNI152NLin6Asym:res-2
eburkevt commented 3 years ago

I'm seeing this issue also with all the spatially normalized outputs for functional images (sub-10316_task-rest_space-MNI152NLin2009cAsym_bold.nii.gz and its mask), but not anatomical images.

Here's a non-AROMA BOLD image (screenshot attached).

I'll rerun with --output-spaces MNI152NLin6Asym:res-2 and let you know what I find.

Thanks for your help.

eburkevt commented 3 years ago

Hi @oesteban, here is the output for the BOLD signal. Same issue with the output space set to MNI152NLin6Asym:res-2 (screenshot attached).

I retained the former FreeSurfer and work directory contents to shorten the processing time a bit, so I'm not sure if that would have had any effect.

oesteban commented 3 years ago

I retained the former FreeSurfer and work directory contents to shorten the processing time a bit, so I'm not sure if that would have had any effect.

This should be fine.

Can I ask you to share the full report with us? I want to check whether there's any clue in it as to what's going on underneath.

eburkevt commented 3 years ago

I shared the full report on my google drive.

I just preprocessed some recently acquired fMRI data (a multiband image sequence) with fMRIPrep v20.2.0 LTS. This fMRI data has isotropic voxels, 2.1 x 2.1 x 2.1 mm (the problem dataset from UCLA has 3 x 3 x 4 mm voxels). The spatial normalization here looks fine ... is it possible that fMRI data with anisotropic voxels are the issue?

(screenshot attached)

oesteban commented 3 years ago

@eburkevt thanks for the updated reports. I think we can say normalization is pretty good for both MNI152NLin6Asym and MNI152NLin2009cAsym based on the reports. That means the problem is in the resampling to MNI space.

The fact that you don't experience any problems with another dataset also indicates there's something particular about the orientation headers of the failing dataset. Anisotropy of voxels should not play any role here (and fMRIPrep processes anisotropic datasets without issues).

Could you share one subject of the failing dataset for me to debug more thoroughly?

oesteban commented 3 years ago

Seems to me like an interaction between #2146 and inconsistent s/q-form matrices could be the culprit. I also suspect #2284 is related.

WDYT @effigies ?

effigies commented 3 years ago

If we could get the headers of a good and bad file from the same study, that would simplify diagnosis.

eburkevt commented 3 years ago

I attached the first NIfTI rsfMRI volume (gzipped) from subject 10316 of the UCLA fMRI dataset in an e-mail.

Here's an example of the s/q-form header info (please see e-mail reply)

(screenshot of the header dump attached)

Thanks,

Chris

oesteban commented 3 years ago

Hi @eburkevt, this is ds030, so it is available on OpenNeuro. That's great news, as we have tons of data for testing.

eburkevt commented 3 years ago

Hi @oesteban, that's good news.

I was running some tests using fMRIPrep v20.2.0 LTS and did find that if I turned off FreeSurfer reconstruction (--fs-no-reconall), the MNI-normalized images for the subject above (sub-10316 from ds030) appear to be okay. I don't know if this is diagnostic or not, but thought I would pass it along. Here was my fmriprep call ..

    singularity run --cleanenv ${FMRIPREP}/fmriprep-20.2.0.lts.simg \
        ${basefolder}/${projectname}/ ${basefolder}/${projectname}/derivatives \
        --fs-license-file $HOME/freesurfer6.txt \
        -w ${basefolder}/work20.2LTS \
        --nthreads 2 \
        --fs-no-reconall \
        participant \
        --participant-label ${subjid}

(screenshot attached)

eburkevt commented 3 years ago

Was this issue fixed in the latest release of fMRIPrep (version 20.2.1, released November 6, 2020)?

effigies commented 3 years ago

No, sorry, we haven't gotten to this yet. I'm not sure about anybody else's schedule, but I for one won't have a chance to dig into this until the new year.

eburkevt commented 3 years ago

Okay, thanks for the update. 

Is there a way to tell if an fMRI dataset will be affected by this bug other than by visual examination of the MNI normalization? For example, is there anything in the qform/sform headers that is diagnostic? The latest versions of fMRIPrep seem to work okay with most of our datasets, and only the UCLA and COBRE data sets seem to be affected so far.

alexjnt commented 3 years ago

Hi @eburkevt, I am currently using the same UCLA dataset in the frame of my bachelor thesis and I'm experiencing a very similar problem in the normalization of BOLD into MNI space... fMRIPrep runs with no errors but the preprocessed BOLD image looks highly distorted (screenshot attached). I'm wondering if you found a solution to the problem.

eburkevt commented 3 years ago

I do not have a solution to this problem. I would suggest using an older version of fMRIPrep (20.0.7 or earlier) to preprocess the UCLA (and similar) datasets.

soichih commented 3 years ago

Here is another sample output from 20.2.1 with UCLA data

(screenshot attached)

effigies commented 3 years ago

Thanks for the posts, all. Just a note that I'm collecting bugs to prepare for a new push, and I hope to diagnose this one in the next week or so. Please feel free to continue adding any information you think would be helpful.

effigies commented 3 years ago

I got similar results as https://github.com/nipreps/fmriprep/issues/2307#issuecomment-719114115 for sub-10316 with --fs-no-reconall. The overall alignment isn't great when looking at reports. Retrying with FreeSurfer to see whether BOLD-T1w improves and BOLD-MNI degrades.

oesteban commented 3 years ago

Successfully replicated the issue with the bleeding-edge version of fMRIPrep. Now attempting to reproduce with maint/20.1.x, will try to pin this down during this week.

oesteban commented 3 years ago

Replicated with 20.1.3 and FS (i.e., bbregister). Investigating https://github.com/nipreps/fmriprep/compare/20.0.7...20.1.3

oesteban commented 3 years ago

I suspect that this dataset is some sort of edge-case in terms of orientation headers of the EPI.

Seems like the fsnative-to-T1w transforms are consistent between 20.0.7 and 20.1.3:

diff work/fmriprep_wf/single_subject_10316_wf/anat_preproc_wf/surface_recon_wf/fsnative2t1w_xfm/T1_robustreg.lta work-20.0.7/fmriprep_wf/single_subject_10316_wf/anat_preproc_wf/surface_recon_wf/fsnative2t1w_xfm/T1_robustreg.lta
2c2
< # created by UNKNOWN on Wed Jul 14 07:22:16 2021
---
> # created by UNKNOWN on Wed Jul 14 10:03:32 2021
15c15
< filename = /out/freesurfer/sub-10316/mri/T1.mgz
---
> filename = /freesurfer/sub-10316/mri/T1.mgz

But, our concatenation and conversion from lta to itk with nitransforms is not fantastic at the moment:

diff work-20.0.7/fmriprep_wf/single_subject_10316_wf/func_preproc_task_bart_wf/bold_reg_wf/bbreg_wf/fsl2itk_fwd/affine.txt work/fmriprep_wf/single_subject_10316_wf/func_preproc_task_bart_wf/bold_reg_wf/bbreg_wf/concat_xfm/out_fwd.tfm
3,4c3,4
< Transform: MatrixOffsetTransformBase_double_3_3
< Parameters: 0.9998190973078196 0.016576133910233114 0.009349958749651045 -0.016744168606405128 0.999694406176838 0.018189181204341154 -0.0090455963632269 -0.018342598066480253 0.9997909622909663 0.4377946306038609 -0.29233498305062255 10.05577289111443
---
> Transform: AffineTransform_float_3_3
> Parameters: 0.999828 0.0160305 0.00930257 -0.0128716 0.869557 -0.108781 -0.00802657 -0.187978 1.17333 0.455154 1.12975 11.3609

Effectively, using 20.0.7's bold-to-t1 file and the remaining parameters to antsApplyTransforms from 20.1.3 produces the expected outcome.

oesteban commented 3 years ago

At least one source of problems comes from nitransforms: poldracklab/nitransforms#125 - I haven't checked the concatenation yet, but it is likely to also induce issues.

effigies commented 3 years ago

I think the quick fix is to lta_convert to RAS2RAS before concatenating with nitransforms, right?

oesteban commented 3 years ago

Even faster, the node is already there https://github.com/nipreps/fmriprep/blob/eab11908b2e52614510877e66ae24642a18d0fca/fmriprep/workflows/bold/registration.py#L547-L548

LukeJNor commented 3 years ago

Hi, we recently ran a very large collection (N is five figures) of datasets through version 20.2.0 LTS, and just became aware of this issue. For now we are spot-checking each study for artifacts that look like the above. If the scans pass our visual QC, can we proceed with the images that we have already preprocessed, or are there more global issues with the resampling to MNI space in this release that might warrant complete re-processing of the images?

effigies commented 3 years ago

I believe so, though that's really your call. The transform is wrong, but if the delta is small, you may not care.

Might be worth picking a subset of 20-50, calculating measures of interest with the old and new versions, and see if there's a discernible effect at the level you care about.

Oscar may have a better intuition about what edge cases were actually inducing the error. It may have only been certain orientations, or anisotropic voxels...

oesteban commented 3 years ago

I believe the culprit is the ordering of axes, when the largest cosines are not ordered x, y, z. But yes, it would be interesting to check on at least a few subjects.
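The axis-ordering condition Oscar describes can be checked programmatically. This is a sketch (not part of fMRIPrep) that tests whether each voxel axis is dominantly aligned with the matching world axis:

```python
import numpy as np

def axes_ordered_xyz(affine):
    """True if voxel axis i is dominantly aligned with world axis i,
    i.e., the largest direction cosines are ordered x, y, z."""
    rzs = affine[:3, :3]
    cosines = rzs / np.linalg.norm(rzs, axis=0)
    return all(np.argmax(np.abs(cosines[:, i])) == i for i in range(3))

# Plumb RAS affine with 3x3x4 mm voxels: ordered
plumb = np.diag([3.0, 3.0, 4.0, 1.0])
print(axes_ordered_xyz(plumb))    # True

# First two axes swapped: largest cosines are ordered y, x, z
swapped = plumb[:, [1, 0, 2, 3]]
print(axes_ordered_xyz(swapped))  # False
```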

That said, good news is that you can reuse the work directory if you happened to preserve it.

oesteban commented 3 years ago

We managed to replicate and resolve this in #2444 for one of the subjects given as an example above.

Other users have reported the fix also worked for them in #2410.

So it seems this can be safely closed. Please reopen just in case I'm wrong.

effigies commented 3 years ago

Okay, so the specific error condition is anisotropic voxels, but there's a class of images with anisotropic voxels that would not be affected:

If the voxel sizes are (x, x, y), such as 3mm x 3mm x 4mm, AND the odd axis out is not oblique (i.e., moving along it moves purely right/left, anterior/posterior, or inferior/superior, not some combination of these), then the bug will have no impact. The severity of the bug scales with the severity of the violation.

Edit: That wasn't entirely true. Adding a new comment to ensure it gets seen.

effigies commented 3 years ago

Here's a simple script to test:

#!/usr/bin/env python
import sys
import numpy as np
import nibabel as nb

def test(img):
    # Voxel sizes, shaped (1, 3) so they broadcast across columns
    zooms = np.array([img.header.get_zooms()[:3]])
    # Rotation/zoom/shear block of the affine
    A = img.affine[:3, :3]

    # Divide the zooms out of the columns, then re-apply them across the
    # rows; the residual is nonzero only when off-diagonal terms couple
    # axes with different voxel sizes
    cosines = A / zooms
    diff = A - cosines * zooms.T

    # (affected?, severity)
    return not np.allclose(diff, 0), np.max(np.abs(diff))

if __name__ == "__main__":
    for fname in sys.argv[1:]:
        affected, severity = test(nb.load(fname))
        if affected:
            print(f"{fname} is affected ({severity=})")

Anything below 1e-7 seems safe. Up to 1e-2 seems likely to be fine, but I'd check. Above that, I would not be surprised to see visible artifacts, but again would check.

effigies commented 3 years ago

For reference, the image Oscar was testing with had severity ~0.728.

utooley commented 3 years ago

I ran with 20.2.0 and recently saw this warning. I have not seen anything like the errors shown above when spot-checking a few subjects' outputs in MNI space and in an MNI-space pediatric template, and I also ran the script on all my EPI inputs, which gave severity = 0.0 for every file. In this case, just to confirm, I should be safe to continue using the MNI outputs from 20.2.0, correct? I wouldn't need to rerun?

effigies commented 3 years ago

Correct, that should be fine. Though I'd be interested in a file that showed both affected and a severity of 0. Could you share the first kilobyte of such a file?

CFGEIER commented 3 years ago

Hi, I'm having issues running the script above. First, I'm getting an invalid syntax error on the last print() command. Is the script as posted above correct? Also (and apologies for being simplistic), should we simply run the script as is, or change the (img) to the preproc_bold.nii file?
Thanks in advance for your time.

effigies commented 3 years ago

The script should work on Python 3.6 and higher. The syntax error is probably due to the f-string.

It's intended to be run with python script.py file1.nii.gz file2.nii.gz

carolinarsm commented 3 years ago

The script should work on Python 3.6 and higher. The syntax error is probably due to the f-string.

It's intended to be run with python script.py file1.nii.gz file2.nii.gz

Had the same issue earlier (servers running python 3.6). Apparently the {var=} is a feature since 3.8. https://docs.python.org/3/whatsnew/3.8.html#f-strings-support-for-self-documenting-expressions-and-debugging
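A minimal illustration of the difference (the value is arbitrary):

```python
severity = 0.728

# Python >= 3.8: the "=" specifier echoes the expression itself
print(f"{severity=}")           # prints: severity=0.728

# Equivalent output on Python 3.6/3.7
print(f"severity={severity}")   # prints: severity=0.728
```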

effigies commented 3 years ago

Ah, thanks. You can remove the =, it just won't show the word severity.

LukeJNor commented 3 years ago

Correct, that should be fine. Though I'd be interested in a file that showed both affected and a severity of 0. Could you share the first kilobyte of such a file?

Are you simply after a file with a severity of 0.0? In that case I have examples from publicly accessible data. Or have I misunderstood?

CFGEIER commented 3 years ago

Hello, still unable to run the script (using python 3.8.5). Could someone please be explicit about which file(s) need to be checked with this approach and the exact syntax used to run the script? Newbie with python.

utooley commented 3 years ago

@effigies I don't have an affected file with a severity of 0.0, sorry if what I posted was unclear! I was trying to check for any way my dataset might have been affected despite not seeing any of the artifacts above.

And @CFGEIER -- I changed the last 4 lines of the script to the below, which will print the severity for each file and should work with Python earlier than 3.8, saved it as saved_testing_script.py, then ran it using the loop below (hacky, but it was the fastest way for me).

if __name__ == "__main__":
    for fname in sys.argv[1:]:
        affected, severity = test(nb.load(fname))
        print(f"{fname} has severity ({severity})")
        if affected:
            print(f"{fname} is affected ({severity})")

Loop:

for sub in `cat $subjectlist`
do
  scans=`find sub-${sub}/ses-01/func/ -iname "sub-${sub}_*_bold.nii.gz"`
  echo $scans
  python saved_testing_script.py $scans
done
CFGEIER commented 3 years ago

@utooley - thank you so much! Got it working now :)

effigies commented 3 years ago

Thanks for the clarification, @utooley.

@LukeJNor If you have some files that are both affected (e.g., the first return value of test() is True) and with severity 0, I would like to see them. Public or private.

djarecka commented 3 years ago

I have multiple images from ABIDE studies that seem to be affected. I understand that I should use 20.2.3 and rerun my analysis, but it's not clear to me whether I can use anything from the previous work directory (run with fMRIPrep 20.2.1).

oesteban commented 3 years ago

In principle it should be safe to reuse the working directory because the nipype graph hasn't changed.

djarecka commented 3 years ago

That's what I thought, but I found @effigies' comment in #2498 that suggests I should create new working directories.

mgxd commented 3 years ago

Generally, reusing within the same minor version should be okay

effigies commented 3 years ago

You should be able to reuse working directories within a minor release series.

JessyD commented 3 years ago

Hi, we processed a few commonly used datasets, and after we saw the warning for version 20.2.0, I went back to check how many subjects were affected. So, I am posting the information below as it might be useful for others. All the information is for resting state fMRI only.

    dataset    affected / total subjects
    ukb           38703 / 38703
    pnc               0 / 1394
    hcp             951 / 1080
    cobre           177 / 177
    fbirn II          5 / 5

For the datasets that were affected, these are some statistics for their severity.

UKBIOBANK:

 Mean: 0.00518193539885735
 STD: 0.0016678250764811612
 Min: 4.714077218269039e-05
 Max: 0.0899535575234005

HCP

 Mean: 1.6014423472312408e-07
 STD: 1.2199390972215357e-07
 Min: 0.0
 Max: 8.087149296898133e-07

Cobre

 Mean: 0.3328491497572712
 STD: 0.09601558139962757
 Min: 0.09060217156461481
 Max: 0.5646974185071048

fbirn II

 Mean: 0.15869213891835365
 STD: 0.14207928970998107
 Min: 0.0020258594995189623
 Max: 0.3462542458014055

I hope it helps!