nipreps / fmriprep

fMRIPrep is a robust and easy-to-use pipeline for preprocessing of diverse fMRI data. The transparent workflow dispenses with manual intervention, thereby ensuring the reproducibility of the results.
https://fmriprep.org
Apache License 2.0

Susceptibility distortion correction (SDC) using Spin Echo Fieldmaps #2210

Open sameera2004 opened 4 years ago

sameera2004 commented 4 years ago

Hi,

I preprocessed resting-state data using fMRIPrep 20.1.1. This dataset contains 20 subjects, 2 sessions per subject, and 4 runs per session. I used spin-echo field maps to do the susceptibility distortion correction (SDC). The SDC with the spin echoes worked for the majority of the data; however, it made the distortions worse in certain subjects. Below is the command that I used to run fMRIPrep:

singularity run --cleanenv --bind /mnt/fMRIprep \
    /mnt/tools/fMRIprep_sing/fmriprep-20.1.1.simg \
    /mnt/fMRIprep/scR21 /mnt/fMRIprep/scR21_outputs \
    -w /mnt/fMRIprep/scR21_outputs/derivatives/scratch \
    --fs-license-file $PWD/license.txt --write-graph --cifti-output 91k \
    participant --participant-label 50015 50016 50020 50024 50027 50033 50042 50137 50144 \
    --ignore slicetiming \
    --output-spaces T1w MNI152NLin2009cAsym:res-2 fsaverage fsLR

I am wondering if anybody else has experienced similar issues. Please let me know.

Thanks in advance. Best regards, Sameera

[Screenshots attached: Screen Shot 2020-07-07 at 5 20 14 PM; Screen Shot 2020-07-07 at 5 20 25 PM]
oesteban commented 3 years ago

Hi @sameera2004, I assume that the above is before SDC and the below shows the image after SDC? If that is the case, you might have a wrong PhaseEncodingDirection parameter in your sidecar JSON. It seems the correction was executed in the opposite direction.

jxvansne commented 3 years ago

Hi @oesteban, Sameera works in my lab. I believe the two images she posted were an AP and a PA phase-encoding direction run (we alternate directions from one run to the next, HCP-style), both after SDC, for a single subject.

When I first saw this back in June, I also thought the PE direction had somehow been specified wrong. We went back and flipped the specified direction in the JSON files (i.e., we changed j- to j and j to j-), and this produced even worse results than you see here. I could post the before-and-after SDC results for both directions if you need to see them, but it might take a few days to dig up / redo since we did this back in June, and I'd want to be certain I was posting the correct results.
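For reference, that flip can be scripted rather than done by hand. A minimal sketch, assuming standard BIDS sidecar JSONs with a PhaseEncodingDirection field (the function name is just for illustration):

```python
import json
from pathlib import Path

def flip_pe_direction(sidecar_path):
    """Toggle the polarity of PhaseEncodingDirection (e.g. j <-> j-) in place."""
    path = Path(sidecar_path)
    meta = json.loads(path.read_text())
    ped = meta["PhaseEncodingDirection"]
    # Strip a trailing "-" if present, otherwise append one.
    meta["PhaseEncodingDirection"] = ped[:-1] if ped.endswith("-") else ped + "-"
    path.write_text(json.dumps(meta, indent=2))
    return meta["PhaseEncodingDirection"]
```

Scripting it also makes it easy to revert the change across all runs if, as here, the flipped direction turns out to be worse.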

Ultimately this issue led us to use the HCP pipeline rather than fMRIPrep for a forthcoming manuscript. I'm in the process of trying to make a final pipeline choice for multiband fMRI in my lab... A side-by-side review of HCP and fMRIPrep results showed that, on the whole, HCP was doing a better job across more subjects: even though HCP completely failed on one subject in a dataset of ~20 subjects (a subject that fMRIPrep had no issue with at all), there would overall have been more data loss due to unacceptable SDC results with fMRIPrep, entirely because of SDC results like those in Sameera's posts above.

Let us know what we can post here (.json files, the fMRIprep .html outputs showing SDC, anything else) to help troubleshoot this issue.

Thanks, -Jared

effigies commented 3 years ago

Would you be able to share a problematic subject?

oesteban commented 3 years ago

a side-by-side review of HCP and fMRIprep results showed that on the whole, HCP was doing a better job across more subjects

I'm curious about how you run this QC step. It might be interesting for fMRIPrep to provide some visualization based on your experience. Could you share a bit more about how you did this?

@effigies Would you be able to share a problematic subject?

This would be really important. Is this possible for you?

jxvansne commented 3 years ago

a side-by-side review of HCP and fMRIprep results showed that on the whole, HCP was doing a better job across more subjects

I'm curious about how you run this QC step. It might be interesting for fMRIPrep to provide some visualization based on your experience. Could you share a bit more about how you did this?

We have a MATLAB OOP post-processing pipeline that can now take in either HCP or fMRIPrep preprocessed results and do additional analysis. The first major output of that pipeline is the set of QC plots of which Sameera cropped the bottom-right quarter above. We use those three orthogonal views of the mean EPI image (averaged over time for a single run) to quickly evaluate SDC / coregistration / normalization. So for 8 runs of data for 20 subjects, I went through and did a side-by-side comparison of HCP and fMRIPrep. I'm mostly looking at the mid-sagittal slices for the length and shape of the corpus callosum, as well as the shape and size of the OFC dropout region and anterior mPFC.
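The core of that QC computation is simple to reproduce outside MATLAB. A minimal numpy sketch (the function name is illustrative; in practice the 4D array would come from something like nibabel.load(...).get_fdata()):

```python
import numpy as np

def mean_epi_midslices(bold_4d):
    """Average a 4D EPI series over time and pull the three mid-slices
    (sagittal, coronal, axial) for a quick visual check of SDC quality."""
    mean_img = np.asarray(bold_4d).mean(axis=3)  # collapse the time axis
    i, j, k = (s // 2 for s in mean_img.shape)
    # Mid-sagittal, mid-coronal, mid-axial planes, respectively.
    return mean_img[i, :, :], mean_img[:, j, :], mean_img[:, :, k]
```

Plotting the three returned planes side by side for each pipeline's output gives the kind of comparison described above, with the mid-sagittal view showing the corpus callosum and OFC dropout region.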

@effigies Would you be able to share a problematic subject?

This would be really important. Is this possible for you?

We're working on this, which is why we hadn't responded yet. I think we should be able to post defaced data in the coming weeks. We'll be looking to provide T1w, T2w, AP/PA spin echo fieldmaps, B0 fieldmaps, and the EPI data itself (defaced). Am I missing anything else?

effigies commented 3 years ago

We're working on this, which is why we hadn't responded yet. I think we should be able to post defaced data in the coming weeks. We'll be looking to provide T1w, T2w, AP/PA spin echo fieldmaps, B0 fieldmaps, and the EPI data itself (defaced). Am I missing anything else?

Any pertinent JSON files. The simplest way to ensure behavior consistent with what you've seen is to share the entire dataset (including all dataset-level JSON and TSV files), just dropping all but one or two subjects.

sameera2004 commented 3 years ago

Hi @effigies and @oesteban ,

I am de-identifying the problematic subject's data. Yes, we will include the JSON and TSV files too, after removing identifiable personal health information from them. Please let me know where you want me to upload the data. Thanks, Sameera

effigies commented 3 years ago

The simplest way is to upload to OpenNeuro. You can keep the dataset private and share it with me. My login is my GitHub username @ gmail.com.

sameera2004 commented 3 years ago

Hi @effigies,

I uploaded to OpenNeuro and shared the dataset with you. Please let me know if you have any issues. Thanks Sameera

effigies commented 3 years ago

Hi @sameera2004, I didn't find an invitation. Could you share the link to your dataset?

sameera2004 commented 3 years ago

Hi @effigies,

Here's the link to the dataset: https://openneuro.org/datasets/ds003130. I just sent another invitation. Thanks, Sameera

effigies commented 3 years ago

Thanks. I can see the dataset and will have a look.

sameera2004 commented 3 years ago

Hi @effigies and @oesteban,

Any updates regarding this SDC issue?

Thanks Sameera

effigies commented 3 years ago

Sorry, these last couple weeks have been pretty full. I'll be looking into this this week.

effigies commented 3 years ago

Hi @sameera2004, we've had a look at this now. We want to verify the intended phase-encoding directions of each of these runs:

import nibabel as nb
import bids

layout = bids.BIDSLayout('/data/bids/ds003130-download/')
imgs = layout.get(extension='.nii.gz', datatype=['fmap', 'func'])
dirs = {"A": {"j": "PA", "j-": "AP"}}  # Truncated for simplicity
for bfile in imgs:
    img = bfile.get_image()
    ornt = ''.join(nb.aff2axcodes(img.affine))
    md = bfile.get_metadata()
    peaxis = md['PhaseEncodingDirection']
    pedir = dirs[ornt[1]][peaxis]
    print(f"{bfile.filename:<46} {ornt} {peaxis:<3} {pedir}")

Output:

sub-50015_ses-1_dir-AP_run-1_epi.nii.gz        LAS j-  AP
sub-50015_ses-1_dir-PA_run-1_epi.nii.gz        LAS j   PA
sub-50015_ses-1_task-RSFC_run-1_bold.nii.gz    LAS j-  AP
sub-50015_ses-1_task-RSFC_run-1_sbref.nii.gz   LAS j-  AP
sub-50015_ses-1_task-RSFC_run-2_bold.nii.gz    LAS j   PA
sub-50015_ses-1_task-RSFC_run-2_sbref.nii.gz   LAS j   PA
sub-50015_ses-1_task-RSFC_run-3_bold.nii.gz    LAS j-  AP
sub-50015_ses-1_task-RSFC_run-3_sbref.nii.gz   LAS j-  AP
sub-50015_ses-1_task-RSFC_run-4_bold.nii.gz    LAS j   PA
sub-50015_ses-1_task-RSFC_run-4_sbref.nii.gz   LAS j   PA
sub-50015_ses-2_dir-AP_run-1_epi.nii.gz        LAS j-  AP
sub-50015_ses-2_dir-PA_run-1_epi.nii.gz        LAS j   PA
sub-50015_ses-2_task-RSFC_run-1_bold.nii.gz    LAS j-  AP
sub-50015_ses-2_task-RSFC_run-1_sbref.nii.gz   LAS j-  AP
sub-50015_ses-2_task-RSFC_run-2_bold.nii.gz    LAS j   PA
sub-50015_ses-2_task-RSFC_run-2_sbref.nii.gz   LAS j   PA
sub-50015_ses-2_task-RSFC_run-3_bold.nii.gz    LAS j-  AP
sub-50015_ses-2_task-RSFC_run-3_sbref.nii.gz   LAS j-  AP
sub-50015_ses-2_task-RSFC_run-4_bold.nii.gz    LAS j   PA
sub-50015_ses-2_task-RSFC_run-4_sbref.nii.gz   LAS j   PA

cc @mgxd @oesteban

jxvansne commented 3 years ago

That looks right, although Sameera can confirm for sure. We initially also thought that the PE direction may have been specified wrong, and actually reprocessed a subject or two in fMRIPrep with the PE direction reversed from what we did initially. That made things far worse, so we concluded that we had the PE direction correct in the first place (which is also consistent with the many other subjects we have whose SDC worked fine).

sameera2004 commented 3 years ago

Hi @effigies,

Yes, these phase-encoding directions are correct. Thanks, Sameera

oesteban commented 3 years ago

Okay, I had an in-depth look at the reports the day before yesterday, and I'm going to write this comment side by side with the report open again. This is because I think fMRIPrep is doing very well in some of the runs and pretty badly in others, which is surprising. I should note that the reports I'm looking at only show the RSFC task (by the way, I think this should be rest to be BIDS-compliant).

At first, I thought there was some problem with the PhaseEncodingDirection of some runs, and I reported that impression back to the fMRIPrep team. However, when coming back to the data, I realized that I was wrong and that @sameera2004 and @jxvansne are indeed reporting an issue with fMRIPrep.

Before digging in, I want to comment on a previous message from @jxvansne because it is closely related to my current judgment of the issue:

I believe the two images she posted were an AP and a PA phase encode direction run (we alternate directions from one run to the next, HCP style) after SDC for a single subject.

AFAIK, the HCP encodings are LR and RL. Given the left-right symmetry of most brains, it makes sense to acquire EPI with alternating encoding polarities (and even to combine them, as in the dMRI case). However, I'd be wary of doing this when the encoding is along the P/A axis: because of signal dropout, the correction can be particularly bad in one of the polarities, and I believe that is what is happening here. I will come back to this below.

Now, let's dive in (reports are referred to https://oesteban.github.io/fmriprep-issue-2210/):

So I believe the fundamental difference/problem is that the HCP Pipelines use FSL TOPUP for this estimation, while fMRIPrep uses AFNI's 3dQwarp. This is not to say that TOPUP is better than 3dQwarp in general; it is to say that, to get visually plausible results with this PE strategy, TOPUP is better. In other words, a visually plausible result does not mean a better result per se in this case, because the dropout will make your fMRI very dubious in the affected regions anyway. That said, we will need to push for a TOPUP implementation of SDC within fMRIPrep, because it is also likely that TOPUP is not just visually better in this case.

This also leads me to state one of the principles of fMRIPrep: it is not meant to replace the HCP Pipelines [or insert any other very specific pipeline here]. If you are acquiring HCP-like data, then you'll probably be better off using the HCP Pipelines.
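For anyone wanting to try TOPUP on such a spin-echo pair outside either pipeline while waiting for an fMRIPrep implementation, the main bookkeeping is the --datain acquisition-parameters file. A minimal sketch of building it from the BIDS fieldmap sidecars (function names are illustrative, not from any pipeline; the commented shell lines show the usual FSL invocation under those assumptions):

```python
import json
from pathlib import Path

# BIDS PhaseEncodingDirection codes mapped to the unit vectors TOPUP expects.
PE_VECTORS = {
    "i": (1, 0, 0), "i-": (-1, 0, 0),
    "j": (0, 1, 0), "j-": (0, -1, 0),
    "k": (0, 0, 1), "k-": (0, 0, -1),
}

def acqparams_row(metadata):
    """One --datain row: PE unit vector plus total readout time (seconds)."""
    x, y, z = PE_VECTORS[metadata["PhaseEncodingDirection"]]
    return f"{x} {y} {z} {metadata['TotalReadoutTime']:.6f}"

def write_acqparams(sidecar_paths, out_file="acqparams.txt"):
    """Build acqparams.txt from a list of fieldmap sidecar JSONs."""
    rows = [acqparams_row(json.loads(Path(p).read_text())) for p in sidecar_paths]
    Path(out_file).write_text("\n".join(rows) + "\n")

# The estimation itself would then be (shell, illustrative only):
#   fslmerge -t se_pair sub-XX_dir-AP_epi.nii.gz sub-XX_dir-PA_epi.nii.gz
#   topup --imain=se_pair --datain=acqparams.txt \
#         --config=b02b0.cnf --out=topup_results --fout=fieldmap_hz
```

Each row in acqparams.txt must line up with the corresponding volume in the merged image, so the sidecars should be listed in the same order as the images passed to fslmerge.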

sameera2004 commented 3 years ago

Hi @oesteban and @effigies

Thank you very much for testing our data and reporting back. Yes, we got the same results when we used fMRIPrep to preprocess sub-50015's data. We saw these issues on the same runs: sub-50015_ses-1_task-RSFC_run-2_desc-sdc_bold.svg, sub-50015_ses-1_task-RSFC_run-4_desc-sdc_bold.svg, sub-50015_ses-2_task-RSFC_run-2_desc-sdc_bold.svg, and sub-50015_ses-2_task-RSFC_run-4_desc-sdc_bold.svg. No issues with the other runs. We experienced similar issues in some other subjects/runs.

Are you planning to include an FSL TOPUP implementation of SDC within fMRIPrep in the near future?

Best, Sameera

effigies commented 3 years ago

There's ongoing work on an implementation in https://github.com/nipreps/sdcflows/pull/106.

jxvansne commented 3 years ago

Hi, is work on this still ongoing? I realize @effigies referenced sdcflows #106 and #117 above, but both of those are closed, and I'm not sure where I should be following to be notified if TOPUP gets an fMRIPrep implementation (or any other workaround for the issue we're experiencing).

Thanks again, -Jared

oesteban commented 3 years ago

Yes, SDCFlows is going through a pretty deep overhaul, and that development version includes TOPUP. However, it will also require quite an effort to propagate the new API into fMRIPrep. For simplicity, we are testing the new solutions in https://github.com/nipreps/dmriprep first.

jmtyszka commented 3 years ago

Hi @oesteban, it looks like fMRIPrep 20.2.1 LTS is really close to supporting TOPUP SDC right now. Just working through the dependencies: sdcflows 2.0.x has the TOPUP workflow in place but needs niworkflows 1.4.0rc5, while fmriprep/smriprep need niworkflows ~=1.3.1. Do you have a development release of fMRIPrep with these versioning issues resolved?

jxvansne commented 3 years ago

We're also eagerly awaiting this. An fMRIPrep release that includes TOPUP SDC would likely trigger reprocessing of most of the data in our lab with fMRIPrep instead of HCP. We're close to starting final analysis on a couple of projects, so this could affect which pipeline we use in published work.

mgxd commented 3 years ago

There is an active branch where we've been testing this feature (https://github.com/nipreps/fmriprep/pull/2392), and I'd say it's fairly close. But because the change involves a substantial shift in which SDC algorithm is used, it will not be included in the LTS; rather, it'll ship in the first minor release of the year (21.0.0).

jmtyszka commented 3 years ago

Fantastic news! Like @jxvansne this would be a game changer in our lab too.

effigies commented 2 years ago

Hi all, this should be resolved in the latest release. To test it out, install the latest with one of the following commands:

Docker wrapper: pip install fmriprep-docker==21.0.0rc0
Docker: docker pull nipreps/fmriprep:21.0.0rc0
Singularity: singularity build fmriprep-21.0.0rc0.simg docker://nipreps/fmriprep:21.0.0rc0

jmtyszka commented 2 years ago

Fantastic and thanks for all your work implementing this. We'll try 21.0.0 out shortly - fingers crossed topup fixes the OFC issues.

jxvansne commented 2 years ago

Exciting news. We'll work on testing this on the originally problematic data this week and next

julfou81 commented 2 years ago

Hi @effigies, I built the Singularity image for the development version of fMRIPrep (21.0.0rc0) and tested it on a dataset that had already been preprocessed with fMRIPrep v20.2.2, and got the following error.

Here is an excerpt of the output:

     [Node] Finished "fmriprep_wf.single_subject_16_wf.func_preproc_task_Production_run_04_wf.initial_boldref_wf.get_dummy".
210910-07:59:01,356 nipype.workflow INFO:
     [Node] Setting-up "fmriprep_wf.single_subject_12_wf.anat_preproc_wf.brain_extraction_wf.mrg_tmpl" in "/work/temp_data_Phonet/fmriprep_wf/single_subject_12_wf/anat_preproc_wf/brain_extraction_wf/mrg_tmpl".
210910-07:59:01,357 nipype.workflow INFO:
     [Node] Outdated cache found for "fmriprep_wf.single_subject_12_wf.anat_preproc_wf.brain_extraction_wf.mrg_tmpl".
210910-07:59:01,371 nipype.workflow INFO:
     [Node] Running "mrg_tmpl" ("nipype.interfaces.utility.base.Merge")
210910-07:59:01,378 nipype.workflow INFO:
     [Node] Finished "fmriprep_wf.single_subject_12_wf.anat_preproc_wf.brain_extraction_wf.mrg_tmpl".
210910-07:59:02,993 nipype.utils WARNING:
     No metadata was found in the pkl file. Make sure you are currently using the same Nipype version from the generated pkl.
210910-07:59:02,993 nipype.workflow CRITICAL:
     Can't get attribute 'ValidateImage' on <module 'niworkflows.interfaces.images' from '/usr/local/miniconda/lib/python3.8/site-packages/niworkflows/interfaces/images.py'>
210910-07:59:02,999 nipype.workflow WARNING:
     Error while checking node hash, forcing re-run. Although this error may not prevent the workflow from running, it could indicate a major problem. Please report a new issue at https://github.com/nipy/nipype/issues adding the following information:

    Node: fmriprep_wf.single_subject_12_wf.func_preproc_task_LocaliserLipTongue_run_01_wf.ds_report_validation
    Interface: fmriprep.interfaces.DerivativesDataSink
    Traceback:
Traceback (most recent call last):

  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/plugins/base.py", line 347, in _local_hash_check
    cached, updated = self.procs[jobid].is_cached()

  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 332, in is_cached
    hashed_inputs, hashvalue = self._get_hashval()

  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 538, in _get_hashval
    self._get_inputs()

  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 585, in _get_inputs
    raise RuntimeError(

RuntimeError: Error populating the inputs of node "ds_report_validation": the results file of the source node (/work/temp_data_Phonet/fmriprep_wf/single_subject_12_wf/func_preproc_task_LocaliserLipTongue_run_01_wf/initial_boldref_wf/val_bold/result_val_bold.pklz) does not contain any outputs.

210910-07:59:03,1 nipype.utils WARNING:
     No metadata was found in the pkl file. Make sure you are currently using the same Nipype version from the generated pkl.
210910-07:59:03,1 nipype.workflow CRITICAL:
     Can't get attribute 'ValidateImage' on <module 'niworkflows.interfaces.images' from '/usr/local/miniconda/lib/python3.8/site-packages/niworkflows/interfaces/images.py'>
210910-07:59:03,38 nipype.utils WARNING:
     No metadata was found in the pkl file. Make sure you are currently using the same Nipype version from the generated pkl.
210910-07:59:03,38 nipype.workflow ERROR:
     Node ds_report_validation failed to run on host skylake036.cluster.
210910-07:59:03,41 nipype.workflow ERROR:
     Saving crash info to /work/Phonet/derivatives/fmriprep/sub-12/log/20210910-075533_af4a6701-fb01-4231-9be0-17da5baa9d27/crash-20210910-075903-jsein-ds_report_validation-f2dd69f8-0590-4122-a6ec-57a2da3038a2.txt
Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/plugins/multiproc.py", line 344, in _send_procs_to_workers
    self.procs[jobid].run(updatehash=updatehash)
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 443, in run
    cached, updated = self.is_cached()
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 332, in is_cached
    hashed_inputs, hashvalue = self._get_hashval()
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 538, in _get_hashval
    self._get_inputs()
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 585, in _get_inputs
    raise RuntimeError(
RuntimeError: Error populating the inputs of node "ds_report_validation": the results file of the source node (/work/temp_data_Phonet/fmriprep_wf/single_subject_12_wf/func_preproc_task_LocaliserLipTongue_run_01_wf/initial_boldref_wf/val_bold/result_val_bold.pklz) does not contain any outputs.

During the creation of this crashfile triggered by the above exception,
another exception occurred:

Each element of the 'out_file' trait of a _DerivativesDataSinkOutputSpec instance must be a pathlike object or string representing an existing file, but a value of '/work/Phonet/derivatives/fmriprep/sub-12/figures/sub-12_task-LocaliserLipTongue_run-1_desc-validation_bold.html' <class 'str'> was specified..
210910-07:59:03,356 nipype.utils WARNING:
     No metadata was found in the pkl file. Make sure you are currently using the same Nipype version from the generated pkl.
210910-07:59:03,356 nipype.workflow CRITICAL:
     Can't get attribute 'ValidateImage' on <module 'niworkflows.interfaces.images' from '/usr/local/miniconda/lib/python3.8/site-packages/niworkflows/interfaces/images.py'>
210910-07:59:03,359 nipype.workflow WARNING:
     Error while checking node hash, forcing re-run. Although this error may not prevent the workflow from running, it could indicate a major problem. Please report a new issue at https://github.com/nipy/nipype/issues adding the following information:

    Node: fmriprep_wf.single_subject_12_wf.func_preproc_task_LocaliserLipTongue_run_01_wf.bold_split
    Interface: nipype.interfaces.fsl.utils.Split
    Traceback:
Traceback (most recent call last):

  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/plugins/base.py", line 347, in _local_hash_check
    cached, updated = self.procs[jobid].is_cached()

  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 332, in is_cached
    hashed_inputs, hashvalue = self._get_hashval()

  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 538, in _get_hashval
    self._get_inputs()

  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 585, in _get_inputs
    raise RuntimeError(

and from the error:

exception calling callback for <Future at 0x2aaace13ad90 state=finished raised FileNotFoundError>
concurrent.futures.process._RemoteTraceback: 
"""
Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/plugins/multiproc.py", line 67, in run_node
    result["result"] = node.run(updatehash=updatehash)
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 486, in run
    self._get_hashval()
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 538, in _get_hashval
    self._get_inputs()
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 585, in _get_inputs
    raise RuntimeError(
RuntimeError: Error populating the inputs of node "merge_sbrefs": the results file of the source node (/work/temp_data_Phonet/fmriprep_wf/single_subject_12_wf/func_preproc_task_LocaliserLipTongue_run_01_wf/initial_boldref_wf/val_sbref/result_val_sbref.pklz) does not contain any outputs.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.8/concurrent/futures/process.py", line 239, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/plugins/multiproc.py", line 70, in run_node
    result["result"] = node.result
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 216, in result
    return _load_resultfile(
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/utils.py", line 291, in load_resultfile
    raise FileNotFoundError(results_file)
FileNotFoundError: /work/temp_data_Phonet/fmriprep_wf/single_subject_12_wf/func_preproc_task_LocaliserLipTongue_run_01_wf/initial_boldref_wf/merge_sbrefs/result_merge_sbrefs.pklz
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.8/concurrent/futures/_base.py", line 328, in _invoke_callbacks
    callback(self)
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/plugins/multiproc.py", line 159, in _async_callback
    result = args.result()
  File "/usr/local/miniconda/lib/python3.8/concurrent/futures/_base.py", line 432, in result
    return self.__get_result()
  File "/usr/local/miniconda/lib/python3.8/concurrent/futures/_base.py", line 388, in __get_result
    raise self._exception
FileNotFoundError: /work/temp_data_Phonet/fmriprep_wf/single_subject_12_wf/func_preproc_task_LocaliserLipTongue_run_01_wf/initial_boldref_wf/merge_sbrefs/result_merge_sbrefs.pklz
exception calling callback for <Future at 0x2aaacd8cb3d0 state=finished raised FileNotFoundError>
concurrent.futures.process._RemoteTraceback: 
"""
Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/plugins/multiproc.py", line 67, in run_node
    result["result"] = node.run(updatehash=updatehash)
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 486, in run
    self._get_hashval()
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 538, in _get_hashval
    self._get_inputs()
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 585, in _get_inputs
    raise RuntimeError(
RuntimeError: Error populating the inputs of node "merge_sbrefs": the results file of the source node (/work/temp_data_Phonet/fmriprep_wf/single_subject_12_wf/func_preproc_task_LocaliserLipTongue_run_02_wf/initial_boldref_wf/val_sbref/result_val_sbref.pklz) does not contain any outputs.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.8/concurrent/futures/process.py", line 239, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/plugins/multiproc.py", line 70, in run_node
    result["result"] = node.result
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 216, in result
    return _load_resultfile(
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/utils.py", line 291, in load_resultfile
    raise FileNotFoundError(results_file)
FileNotFoundError: /work/temp_data_Phonet/fmriprep_wf/single_subject_12_wf/func_preproc_task_LocaliserLipTongue_run_02_wf/initial_boldref_wf/merge_sbrefs/result_merge_sbrefs.pklz
mgxd commented 2 years ago

@julfou81, since there have been substantial changes to the workflow structure, we recommend using a new working directory.

julfou81 commented 2 years ago

Good call! I forgot to delete the existing working directory. I will do that and try again.

sameera2004 commented 2 years ago

@mgxd, @effigies, I also built the Singularity image from the development version and created a new working directory for the outputs. However, I am getting the following error:

Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.8/concurrent/futures/_base.py", line 328, in _invoke_callbacks
    callback(self)
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/plugins/multiproc.py", line 159, in _async_callback
    result = args.result()
  File "/usr/local/miniconda/lib/python3.8/concurrent/futures/_base.py", line 432, in result
    return self.__get_result()
  File "/usr/local/miniconda/lib/python3.8/concurrent/futures/_base.py", line 388, in __get_result
    raise self._exception
  File "/usr/local/miniconda/lib/python3.8/concurrent/futures/_base.py", line 328, in _invoke_callbacks
    callback(self)
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/plugins/multiproc.py", line 159, in _async_callback
    result = args.result()
  File "/usr/local/miniconda/lib/python3.8/concurrent/futures/_base.py", line 432, in result
    return self.__get_result()
  File "/usr/local/miniconda/lib/python3.8/concurrent/futures/_base.py", line 388, in __get_result
    raise self._exception
  File "/usr/local/miniconda/lib/python3.8/concurrent/futures/_base.py", line 328, in _invoke_callbacks
    callback(self)
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/plugins/multiproc.py", line 159, in _async_callback
    result = args.result()
  File "/usr/local/miniconda/lib/python3.8/concurrent/futures/_base.py", line 432, in result
    return self.__get_result()
  File "/usr/local/miniconda/lib/python3.8/concurrent/futures/_base.py", line 388, in __get_result
    raise self._exception
  File "/usr/local/miniconda/lib/python3.8/concurrent/futures/_base.py", line 328, in _invoke_callbacks
    callback(self)
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/plugins/multiproc.py", line 159, in _async_callback
    result = args.result()
  File "/usr/local/miniconda/lib/python3.8/concurrent/futures/_base.py", line 432, in result
    return self.__get_result()
  File "/usr/local/miniconda/lib/python3.8/concurrent/futures/_base.py", line 388, in __get_result
    raise self._exception
  File "/usr/local/miniconda/lib/python3.8/concurrent/futures/_base.py", line 328, in _invoke_callbacks
    callback(self)
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/plugins/multiproc.py", line 159, in _async_callback
    result = args.result()
  File "/usr/local/miniconda/lib/python3.8/concurrent/futures/_base.py", line 432, in result
    return self.__get_result()
  File "/usr/local/miniconda/lib/python3.8/concurrent/futures/_base.py", line 388, in __get_result
    raise self._exception
  File "/usr/local/miniconda/lib/python3.8/concurrent/futures/_base.py", line 328, in _invoke_callbacks
    callback(self)
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/plugins/multiproc.py", line 159, in _async_callback
    result = args.result()
  File "/usr/local/miniconda/lib/python3.8/concurrent/futures/_base.py", line 432, in result
    return self.__get_result()
  File "/usr/local/miniconda/lib/python3.8/concurrent/futures/_base.py", line 388, in __get_result
    raise self._exception
  File "/usr/local/miniconda/lib/python3.8/concurrent/futures/_base.py", line 328, in _invoke_callbacks
    callback(self)
  [... the frames above (multiproc.py:159 → _base.py:432 → _base.py:388 → _base.py:328) repeat identically for each nested callback ...]
concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.

I was trying to process about 10 subjects, and I think I am getting a similar error for each job. Now I am trying to process only one subject.

Do you know why I am getting the error?

@effigies, were you able to process the problematic subject data I shared with you about a year ago? If you were able to process that data, can you please share those results with us?

Thanks Sameera

mgxd commented 2 years ago

https://fmriprep.org/en/stable/faq.html#my-fmriprep-run-is-hanging

Fastest solution is to add more memory to the job.
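A minimal sketch of how to cap resources explicitly, reusing the Singularity invocation from the original post. `--mem-mb`, `--nprocs`, and `--low-mem` are real fMRIPrep options; the 32 GB / 8-process numbers are illustrative assumptions to size for your node. The sketch only prints the command so it can be inspected before running:

```shell
# Assumption: resource numbers below are examples; tune them to your node.
MEM_MB=32000   # upper bound on memory the workflow may request
NPROCS=8       # maximum number of parallel Nipype processes
echo "singularity run --cleanenv --bind /mnt/fMRIprep \
  /mnt/tools/fMRIprep_sing/fmriprep-20.1.1.simg \
  /mnt/fMRIprep/scR21 /mnt/fMRIprep/scR21_outputs participant \
  --mem-mb ${MEM_MB} --nprocs ${NPROCS} --low-mem"
```

Lowering `--nprocs` while raising `--mem-mb` is often enough to keep individual Nipype workers from being OOM-killed (the usual cause of `BrokenProcessPool`).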

effigies commented 2 years ago

@sameera2004 Yes, I still have ds3130. I will set that to run tonight and share the results Monday.

julfou81 commented 2 years ago

Testing is in progress and it is running well. However, I already noticed a difference in the BOLD summary for each run (same dataset in each case):

The dataset is somewhat special, as its fmap folder contains data for both the PEPolar method and the fieldmap-based method. It looks like the heuristic for picking the susceptibility correction method differs between the two fMRIPrep versions. Can you confirm that?

effigies commented 2 years ago

It looks like the heuristic for picking the susceptibility correction method differs between the two fMRIPrep versions. Can you confirm that?

Yes, these have changed.

Edit:

Old order:

New order:

julfou81 commented 2 years ago

ok, good to know. Out of pure curiosity: is there any particular rationale for this change? From what I read in the literature, both methods have their own advantages and drawbacks; it seems hard to really pick one over the other anyway.

EDIT: oh I see, PEPolar moved way down the list. I am even more curious about this new choice. ;-)

effigies commented 2 years ago

I think the order is based on distance from a direct measurement of the B0 field, but I'm pretty sure @oesteban is the one who set the order and better placed to respond.

mgxd commented 2 years ago

I'm not sure of the rationale behind the change, but just to clarify: the user still has full control over which SDC method is used. Priority can be assigned by either:

1. Using the B0FieldIdentifier/B0FieldSource fields (new BIDS metadata that is already supported by fMRIPrep).
2. Removing the IntendedFor metadata field from all fieldmaps not desired for correction.

I highly recommend shifting to option 1 as it is a much cleaner solution.
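For illustration, a minimal pair of BIDS sidecars wired together with option 1. `B0FieldIdentifier` and `B0FieldSource` are the BIDS metadata field names; the label `"pepolar_ses1"`, the timing values, and the filenames are made-up examples. In the fieldmap sidecar (e.g. `fmap/sub-50015_ses-1_dir-AP_epi.json`):

```json
{
  "PhaseEncodingDirection": "j-",
  "TotalReadoutTime": 0.065,
  "B0FieldIdentifier": "pepolar_ses1"
}
```

and in the BOLD sidecar (e.g. `func/sub-50015_ses-1_task-RSFC_run-1_bold.json`):

```json
{
  "PhaseEncodingDirection": "j",
  "TotalReadoutTime": 0.065,
  "B0FieldSource": "pepolar_ses1"
}
```

With these in place, the run is corrected with that specific fieldmap, and the `IntendedFor` field is no longer needed.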

sameera2004 commented 2 years ago

Thanks @mgxd and @effigies. @effigies, were you able to process the data? I am trying to process the same subject's data and I am getting the following error.

Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/plugins/multiproc.py", line 67, in run_node
    result["result"] = node.run(updatehash=updatehash)
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 516, in run
    result = self._run_interface(execute=True)
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 635, in _run_interface
    return self._run_command(execute)
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 741, in _run_command
    result = self._interface.run(cwd=outdir)
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/interfaces/base/core.py", line 406, in run
    version=self.version,
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/interfaces/workbench/base.py", line 39, in version
    return Info.version()
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/interfaces/base/core.py", line 1127, in version
    clout = CommandLine(
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/interfaces/base/core.py", line 428, in run
    runtime = self._run_interface(runtime)
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/interfaces/base/core.py", line 822, in _run_interface
    self.raise_exception(runtime)
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/interfaces/base/core.py", line 749, in raise_exception
    raise RuntimeError(
RuntimeError: Command: wb_command -version
Standard output:

Standard error:
wb_command: error while loading shared libraries: libQt5Core.so.5: cannot open shared object file: No such file or directory
Return code: 127

I am also processing one other subject data and I am getting the following error for that.

allow_4D: True
in_files:

Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/plugins/multiproc.py", line 67, in run_node
    result["result"] = node.run(updatehash=updatehash)
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 516, in run
    result = self._run_interface(execute=True)
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 635, in _run_interface
    return self._run_command(execute)
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 741, in _run_command
    result = self._interface.run(cwd=outdir)
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/interfaces/base/core.py", line 428, in run
    runtime = self._run_interface(runtime)
  File "/usr/local/miniconda/lib/python3.8/site-packages/niworkflows/interfaces/nibabel.py", line 191, in _run_interface
    img_4d = nb.concat_images(nii_list)
  File "/usr/local/miniconda/lib/python3.8/site-packages/nibabel/funcs.py", line 141, in concat_images
    raise ValueError(f'Affine for image {i} does not match affine for first image')
ValueError: Affine for image 3 does not match affine for first image

Can you please advise me to fix these issues?
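A quick way to locate the offending run before re-running fMRIPrep is to compare the image affines directly. A minimal sketch using a pure-NumPy stand-in for nibabel's check (nibabel compares affines exactly; the tolerance here is an assumption, and the glob pattern is hypothetical):

```python
import numpy as np

def first_affine_mismatch(affines, atol=1e-4):
    """Return the index of the first affine that differs from affines[0],
    or None if all match within `atol`. A tolerance-based variant of the
    equality check that makes nibabel's concat_images raise."""
    ref = np.asarray(affines[0])
    for i, aff in enumerate(affines[1:], start=1):
        if not np.allclose(ref, np.asarray(aff), atol=atol):
            return i
    return None

# With real data the list would come from nibabel (hypothetical filenames):
#   affines = [nb.load(f).affine for f in sorted(glob("sub-*_run-*_epi.nii.gz"))]
eye = np.eye(4)
shifted = eye.copy()
shifted[1, 3] = 2.0  # a 2 mm translation, e.g. a re-acquired fieldmap
print(first_affine_mismatch([eye, eye, eye, shifted]))  # → 3
```

If one fieldmap was acquired at a different position (e.g. after the participant was repositioned), resampling it onto the grid of the first image, or excluding it, would sidestep the `concat_images` failure.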

julfou81 commented 2 years ago

I completed the first test with v21.0.0rc0 on data for which I had already obtained preprocessing results with fMRIPrep v20.2.2. It worked well. Here are the differences I noted in the HTML report compared with fMRIPrep 20.2.2:

It is great to now have TOPUP implemented in fMRIPrep! Thank you for the good work!

oesteban commented 2 years ago

Thanks much for the thorough reporting!

Here are the differences I noted in the HTML report compared with fmriprep 20.2.2

* the number of estimated non-steady-state volumes was different, going from 0 in v20 to 1 or 2 in v21.

There are some changes in how we calculate non-steady states - this is unsurprising, but we probably want to have this looked into more carefully.

* the display for **susceptibility distortion correction** is different: outline of the brain mask in red for v21 and outline of the GM/WM in blue for v20

Yes, unless you meant there's a problem here, I think this was intended (I believe @mgxd made some changes towards this adjustment).

* **Alignment of functional and anatomical MRI data (surface driven)**: same thing, except that the WM mask is displayed in red in v21 and blue in v20

Same as above, trying to make the results more visible.

* **Brain mask and (anatomical/temporal) CompCor ROIs**: in v21, the image used as background for the CompCor ROIs is the uncorrected image (visible because the brain extends well beyond the brain-mask outline), whereas v20 used the corrected image: it looks like a mistake in v21.

Yes, could you file an issue reporting this as a bug?

* in the **Methods** section, the title for _Anatomical data preprocessing_ is not in bold.

Yup, @mgxd is working on this.

effigies commented 2 years ago

Running ds003130 (@sameera2004's dataset), something seems to be going wrong:

Raw

sub-50015_ses-1_task-RSFC_run-1_bold

Preproc

sub-50015_ses-1_task-RSFC_run-1_space-T1w_desc-preproc_bold

Command

fmriprep-docker /data/bids/ds003130-download /data/out/ds003130-fmriprep_21.0.0rc0 participant --ignore slicetiming --fs-no-reconall -vv -w /data/scratch/ds003130-fmriprep21 --output-spaces T1w
sameera2004 commented 2 years ago

Thanks @effigies.

Following is the command I used to process the data:

singularity run --cleanenv --bind /mnt/fMRIprep /mnt/tools/fMRIprep_sing/fmriprep-21.0.0rc0.simg /mnt/fMRIprep/scR21 /mnt/fMRIprep/scR21_outputs_21_Sep_2021 -w /mnt/fMRIprep/scR21_outputs_21_Sep_2021/derivatives/scratch --fs-license-file $PWD/license.txt --write-graph --cifti-output 91k participant --participant-label sub-50015 --ignore slicetiming --output-spaces T1w MNI152NLin2009cAsym:res-2 fsaverage fsLR

But I am getting the following error (please see my previous comment for more information):

Standard error:
wb_command: error while loading shared libraries: libQt5Core.so.5: cannot open shared object file: No such file or directory
Return code: 127

I am not sure how to fix it. Please let me know if I am using the correct command.

oesteban commented 2 years ago

singularity run --cleanenv --bind /mnt/fMRIprep /mnt/tools/fMRIprep_sing/fmriprep-21.0.0rc0.simg /mnt/fMRIprep/scR21 /mnt/fMRIprep/scR21_outputs_21_Sep_2021 -w /mnt/fMRIprep/scR21_outputs_21_Sep_2021/derivatives/scratch --fs-license-file $PWD/license.txt --write-graph --cifti-output 91k participant --participant-label sub-50015 --ignore slicetiming --output-spaces T1w MNI152NLin2009cAsym:res-2 fsaverage fsLR

But I am getting the following error (please see my previous comment for more information):

Standard error:
wb_command: error while loading shared libraries: libQt5Core.so.5: cannot open shared object file: No such file or directory
Return code: 127

Looks like a missing dependency for the Workbench. Paging @mgxd.

@effigies - can you post the reportlet of the fieldmap? That one should show the corrected original EPIs, before applying the correction to any target dataset.

If those look alright, then I believe this is a replication of what @mgxd is seeing with his visual task dataset (nipreps/sdcflows#218).

effigies commented 2 years ago

Let's move the singularity/workbench/libQt5Core.so.5 conversation over to #2534. It's not a missing dependency (I checked in Docker) but an interaction between an installed library and Singularity. An ABI tag mismatch seems like a plausible source of failure.
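One way to see which libraries `wb_command` fails to resolve for the libQt5Core.so.5 error is to run `ldd` inside the container. The image path below is taken from the original post; the sketch only prints the `singularity exec` command so it can be reviewed and run where Singularity is available:

```shell
# Assumption: image path from the original post; adjust to your setup.
# Listing unresolved or Qt-related libraries usually pinpoints an ABI or
# search-path problem between the container and the host.
CMD='ldd "$(command -v wb_command)" | grep -iE "qt5|not found"'
echo "singularity exec --cleanenv /mnt/tools/fMRIprep_sing/fmriprep-21.0.0rc0.simg bash -c '${CMD}'"
```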

effigies commented 2 years ago

Here are the two SVGs.

sub-50015.zip

oesteban commented 2 years ago

sub-50015_ses-1_run-1_fmapid-auto00000_desc-pepolar_fieldmap.svg - the reference doesn't look as terrible as the example above. I'm inclined to think this is nipreps/sdcflows#218.

oesteban commented 2 years ago

Okay, I ran the current master, and I'd say this is not nipreps/sdcflows#218. Looking at the distorted/corrected EPI reportlets, it seems to me that the correction is happening in the right direction. However, we are feeding the wrong files to the derivatives datasink and to the confounds reporting workflow (and, because of these two, probably to the confounds workflow itself).

I'll come back later with news about this issue.