PennLINC / qsiprep

Preprocessing of diffusion MRI
http://qsiprep.readthedocs.io
BSD 3-Clause "New" or "Revised" License

DWI concatenation issue #106

Closed: araikes closed this issue 4 years ago

araikes commented 4 years ago

Related to the dataset from #98, I get the following when concatenating (ses-bline shown but ses-ptx has the same error reported):

Node: qsiprep_wf.single_subject_allo103_wf.dwi_preproc_ses_bline_acq_b_wf.pre_hmc_wf.merge_and_denoise_wf.dwi_qc_wf.concat_raw_dwis
Working directory: /data/allo/derivatives/qsiprep-0.8.0RC2/test/scratch/qsiprep_wf/single_subject_allo103_wf/dwi_preproc_ses_bline_acq_b_wf/pre_hmc_wf/merge_and_denoise_wf/dwi_qc_wf/concat_raw_dwis

Node inputs:

compress = True
dtype = f4
header_source = <undefined>
in_files = ['/nifti/sub-allo103/ses-bline/dwi/sub-allo103_ses-bline_acq-b1000_dir-PA_run-001_dwi.nii.gz', '/nifti/sub-allo103/ses-bline/dwi/sub-allo103_ses-bline_acq-b2000_dir-PA_run-001_dwi.nii.gz']
is_dwi = True

Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/plugins/multiproc.py", line 69, in run_node
    result['result'] = node.run(updatehash=updatehash)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 473, in run
    result = self._run_interface(execute=True)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 557, in _run_interface
    return self._run_command(execute)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 637, in _run_command
    result = self._interface.run(cwd=outdir)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 375, in run
    runtime = self._run_interface(runtime)
  File "/usr/local/miniconda/lib/python3.7/site-packages/qsiprep/interfaces/nilearn.py", line 133, in _run_interface
    new_nii = concat_imgs(self.inputs.in_files, dtype=self.inputs.dtype)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nilearn/_utils/niimg_conversions.py", line 459, in concat_niimgs
    memory=memory, memory_level=memory_level))):
  File "/usr/local/miniconda/lib/python3.7/site-packages/nilearn/_utils/niimg_conversions.py", line 150, in _iter_check_niimg
    niimg.shape))
ValueError: Field of view of image #1 is different from reference FOV.
Reference affine:
array([[ -0.85939997,   0.        ,  -0.        , 106.85299683],
       [ -0.        ,   0.85939997,  -0.        , -87.10998535],
       [  0.        ,   0.        ,   3.        , -91.61499786],
       [  0.        ,   0.        ,   0.        ,   1.        ]])
Image affine:
array([[ -0.85939997,   0.        ,  -0.        , 109.56999969],
       [ -0.        ,   0.85939997,  -0.        , -86.47598267],
       [  0.        ,   0.        ,   3.        , -92.52089691],
       [  0.        ,   0.        ,   0.        ,   1.        ]])
Reference shape:
(256, 256, 50)
Image shape:
(256, 256, 50, 21)

mrinfo on the dwis:

************************************************
Image:               "sub-allo103_ses-bline_acq-b1000_dir-PA_run-001_dwi.nii.gz"
************************************************
  Dimensions:        256 x 256 x 50 x 26
  Voxel size:        0.8594 x 0.8594 x 3 x 14
  Data strides:      [ -1 2 3 4 ]
  Format:            NIfTI-1.1 (GZip compressed)
  Data type:         signed 16 bit integer (little endian)
  Intensity scaling: offset = 0, multiplier = 1
  Transform:                    1           0          -0      -112.3
                                0           1          -0      -87.11
                               -0           0           1      -91.61
  comments:          TE=87;Time=164022.000
************************************************
Image:               "sub-allo103_ses-bline_acq-b2000_dir-PA_run-001_dwi.nii.gz"
************************************************
  Dimensions:        256 x 256 x 50 x 21
  Voxel size:        0.8594 x 0.8594 x 3 x 17
  Data strides:      [ -1 2 3 4 ]
  Format:            NIfTI-1.1 (GZip compressed)
  Data type:         signed 16 bit integer (little endian)
  Intensity scaling: offset = 0, multiplier = 1
  Transform:                    1           0          -0      -109.6
                                0           1          -0      -86.48
                               -0           0           1      -92.52
  comments:          TE=99;Time=164858.000
mattcieslak commented 4 years ago

It looks like you are trying to concatenate two DWI series that have different FoVs (you can see the different offsets in the affines above). Would you be OK with resampling the second series into the FoV of the first? You'll also need to use --denoise-before-combining, because after the two runs are combined the second half of your data will have been interpolated.

araikes commented 4 years ago

I don't think there'd be any issue resampling. I'll give it a shot and see.

araikes commented 4 years ago

Using --denoise-before-combining yields the same error. Additionally, further steps (i.e., mrdegibbs and biascorr) are being run separately for each shell acquisition.

araikes commented 4 years ago

To follow up on this, it looks like it did, in fact, concatenate the images: the output folder contains them concatenated into a single image with the expected dimensions. However, the report was not produced:

         [Node] Running "series_qc" ("qsiprep.interfaces.reports.SeriesQC")
200227-16:10:00,898 nipype.workflow INFO:
         [Node] Finished "qsiprep_wf.single_subject_allo104_wf.dwi_finalize_ses_bline_acq_b_wf.series_qc".
200227-16:10:10,937 nipype.workflow INFO:
         [Node] Finished "qsiprep_wf.single_subject_allo104_wf.dwi_finalize_ses_bline_acq_b_wf.transform_dwis_t1.final_b0_ref.b0ref_reportlet".
200227-16:10:14,548 nipype.workflow ERROR:
         could not run node: qsiprep_wf.single_subject_allo104_wf.dwi_preproc_ses_ptx_acq_b_wf.pre_hmc_wf.dwi_qc_wf.concat_raw_dwis
200227-16:10:14,554 nipype.workflow ERROR:
         could not run node: qsiprep_wf.single_subject_allo104_wf.dwi_preproc_ses_ptx_acq_b_wf.pre_hmc_wf.merge_and_denoise_wf.dwi_qc_wf.concat_raw_dwis
QSIPrep failed: Workflow did not execute cleanly. Check log for details
Traceback (most recent call last):
  File "/usr/local/miniconda/bin/qsiprep", line 8, in <module>
    sys.exit(main())
  File "/usr/local/miniconda/lib/python3.7/site-packages/qsiprep/cli/run.py", line 566, in main
    qsiprep_wf.run(**plugin_settings)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/workflows.py", line 599, in run
    runner.run(execgraph, updatehash=updatehash, config=self.config)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/plugins/base.py", line 191, in run
    report_nodes_not_run(notrun)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/plugins/tools.py", line 82, in report_nodes_not_run
    raise RuntimeError(('Workflow did not execute cleanly. '
RuntimeError: Workflow did not execute cleanly. Check log for details
araikes commented 4 years ago

Any recommendations for getting the report to generate here @mattcieslak?

Thanks

araikes commented 4 years ago

As a further follow-up, I finally had an opportunity to run the full dataset. It appears that if the FoVs are different, the report does not run, though the merged file appears to be written correctly.

However, if the FoVs are the same, it runs without issue (though for a group run with the same parameters, this means denoising each image separately when in fact they could be concatenated first).

RobertJirsaraie commented 4 years ago

@araikes, I think I am running into the same issue, where the dimensions and voxel sizes are not the same across all my DWI runs. I tried resampling all my niftis using the for loop below, but when I check mrinfo the offsets still differ and the same qsiprep error occurs. Would you be able to share the commands you used to resolve this issue?

for scan in /dfs2/yassalab/rjirsara/ConteCenterScripts/Conte-One/bids/*/*/dwi/*.nii.gz ; do
    mrresize -size 84,128,128 ${scan} test.nii.gz
    3drefit -zdel 1.69 test.nii.gz
    mv test.nii.gz ${scan}
    chmod ug+wrx ${scan}
done


RobertJirsaraie commented 4 years ago

Hi @araikes or @mattcieslak,

I was just wondering if there was any code or commands that would resolve this issue. I am still stuck at this step and would greatly appreciate anything you could share with me.

araikes commented 4 years ago

Hi @RobertJirsaraie

I did, in fact, get this working (as of yesterday). The code is slightly non-BIDS-compliant because I was more interested in getting it to work than in making it fully compliant (e.g., my output file name shouldn't be the same as the input, but I'm running this on a NIfTI copy in my derivatives folder where I've also deobliqued the T1s, so with sufficient provenance I see no problem in the short term). I'll let @mattcieslak chime in if he sees something wrong with this, but I'm now getting reports for my whole dataset.

I'm running this in a Docker environment that just has Python, the Ni-family (nibabel, nilearn, nistats, nipype), pybids, and DIPY. Here I'm resampling the b2000 shell into the b1000 shell's FoV.

#!/usr/bin/env python

import nibabel as nib
import numpy as np
from os.path import join
from nilearn import image as img
import json
from bids.layout import BIDSLayout

# Get BIDS layout
layout = BIDSLayout(root = '/data', validate = False)

subjects = layout.get_subjects()
sessions = layout.get_sessions()

# Initialize JSON data
resample_json = {}
resample_json["participantid"] = []

for subj in subjects:
    for sesh in sessions:
        b1000_file = layout.get(subject=subj, session = sesh,
                               suffix = "dwi", acquisition = "b1000",
                               extension = "nii.gz",
                               return_type = 'filename')

        b2000_file = layout.get(subject=subj, session = sesh,
                       suffix = "dwi", acquisition = "b2000",
                       extension = "nii.gz",
                       return_type = 'filename')

        if not b1000_file or not b2000_file:
            resample_json['participantid'].append({
                'participantid': subj,
                'session': sesh,
                'status': 'Incomplete acquisition capture'})
            continue

        b1000 = nib.load(b1000_file[0])
        b2000 = nib.load(b2000_file[0])
        check_fov = b1000.affine == b2000.affine

        if np.all(check_fov):
            resample_json['participantid'].append({
                'participantid': subj,
                'session': sesh,
                'status': 'FOV matches. Resampling not required.'})
            continue
        else:
            b2000_resample = img.resample_to_img(b2000, b1000)
            nib.save(b2000_resample, b2000_file[0])

            resample_json['participantid'].append({
                'participantid': subj,
                'session': sesh,
                'status': 'FOVs did not match. b2000 acquisition resampled.'})

with open('/data/resampling.json', 'w') as outfile:
    json.dump(resample_json, outfile, indent = 4)
mattcieslak commented 4 years ago

Hi @araikes and @RobertJirsaraie, I just realized that the easiest and most valid way to get around this (assuming none of your images are acquired obliquely) is to copy the affines from the first image into the second. If you resample the image before running qsiprep, the assumptions made by mrdegibbs and dwidenoise are going to be violated.

araikes commented 4 years ago

Hi @mattcieslak

So then, do you have a recommendation on transferring the affines?

mattcieslak commented 4 years ago

You should be able to do something like

import nibabel as nb
import numpy as np

first_img = nb.load("sub-1_run-1_dwi.nii.gz")
second_img = nb.load("sub-1_run-2_dwi.nii.gz")
# Keep the second run's data but reuse the first run's affine and header
fixed_second_img = nb.Nifti1Image(np.asanyarray(second_img.dataobj),
                                  first_img.affine, header=first_img.header)
fixed_second_img.to_filename("sub-1_run-2_acq-fixedhdr_dwi.nii.gz")

And then run qsiprep with the original sub-1_run-2_dwi.nii.gz removed. I would still strongly suggest using --denoise-before-combining.

araikes commented 4 years ago

I'll give it a run and see.

araikes commented 4 years ago

Looking at the headers for my files, they are different in several areas. Should I be using the header from the b1000 acquisition in the re-FoV'd b2000? I would assume the header should be the second image's header, since it was just the offsets that didn't match.

b1000:

sizeof_hdr      : 348
data_type       : b''
db_name         : b''
extents         : 0
session_error   : 0
regular         : b'r'
dim_info        : 57
dim             : [  4 256 256  50  26   1   1   1]
intent_p1       : 0.0
intent_p2       : 0.0
intent_p3       : 0.0
intent_code     : none
datatype        : int16
bitpix          : 16
slice_start     : 0
pixdim          : [-1.      0.8594  0.8594  3.     14.      0.      0.      0.    ]
vox_offset      : 0.0
scl_slope       : nan
scl_inter       : nan
slice_end       : 0
slice_code      : unknown
xyzt_units      : 10
cal_max         : 0.0
cal_min         : 0.0
slice_duration  : 0.0
toffset         : 0.0
glmax           : 0
glmin           : 0
descrip         : b'TE=89;Time=163847.000'
aux_file        : b''
qform_code      : scanner
sform_code      : scanner
quatern_b       : 0.0
quatern_c       : 1.0
quatern_d       : 0.0
qoffset_x       : 110.207
qoffset_y       : -83.30798
qoffset_z       : -82.3748
srow_x          : [ -0.8594   0.      -0.     110.207 ]
srow_y          : [ -0.        0.8594   -0.      -83.30798]
srow_z          : [  0.       0.       3.     -82.3748]
intent_name     : b''
magic           : b'n+1'

b2000:

sizeof_hdr      : 348
data_type       : b''
db_name         : b''
extents         : 0
session_error   : 0
regular         : b'r'
dim_info        : 57
dim             : [  4 256 256  50  21   1   1   1]
intent_p1       : 0.0
intent_p2       : 0.0
intent_p3       : 0.0
intent_code     : none
datatype        : int16
bitpix          : 16
slice_start     : 0
pixdim          : [-1.      0.8594  0.8594  3.     17.      0.      0.      0.    ]
vox_offset      : 0.0
scl_slope       : nan
scl_inter       : nan
slice_end       : 0
slice_code      : unknown
xyzt_units      : 10
cal_max         : 0.0
cal_min         : 0.0
slice_duration  : 0.0
toffset         : 0.0
glmax           : 0
glmin           : 0
descrip         : b'TE=1e+02;Time=163057.000'
aux_file        : b''
qform_code      : scanner
sform_code      : scanner
quatern_b       : 0.0
quatern_c       : 1.0
quatern_d       : 0.0
qoffset_x       : 110.207
qoffset_y       : -83.30798
qoffset_z       : -82.3748
srow_x          : [ -0.8594   0.      -0.     110.207 ]
srow_y          : [ -0.        0.8594   -0.      -83.30798]
srow_z          : [  0.       0.       3.     -82.3748]
intent_name     : b''
magic           : b'n+1'
cookpa commented 4 years ago

Some DICOM converters add extra information to the pixel dimensions (like repetition time for BOLD data). It looks like that's where your headers vary.

Can you solve the problem by zeroing out the pixdims after the first three?

araikes commented 4 years ago

They also vary in the actual dimensions (particularly the number of directions acquired) as well as in the TE/TR description (I know it's a string).

Pragmatic question: if the solution is to zero out the pixdim differences (ignoring the other differences), does it actually matter which header is used, since all of the other information is the same (again, ignoring the differences noted above)?

mattcieslak commented 4 years ago

The TE/TR are gathered from the json sidecars, so the nifti headers won't make a difference
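
To illustrate the point, a minimal sketch: qsiprep reads timing metadata from the BIDS JSON sidecar, not the NIfTI header. The field names below are the standard BIDS keys; the file path and values are placeholders, not from this dataset.

```python
# Placeholder sidecar standing in for a sub-*_dwi.json file.
import json
import os
import tempfile

sidecar = {"EchoTime": 0.087, "RepetitionTime": 14.0}
path = os.path.join(tempfile.gettempdir(), "example_dwi.json")
with open(path, "w") as f:
    json.dump(sidecar, f, indent=4)

# This is the source of truth for TE/TR, regardless of the NIfTI descrip field.
with open(path) as f:
    meta = json.load(f)
print(meta["EchoTime"], meta["RepetitionTime"])
```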

araikes commented 4 years ago

That's what I figured. Thanks @mattcieslak.

RobertJirsaraie commented 4 years ago

@mattcieslak, your code worked for me! I applied the affine transform to my niftis, which were already resampled to have the same number of voxels and voxel dimensions across all runs. Thank you very much!

araikes commented 4 years ago

Hi @mattcieslak, That code worked for me too. Successfully processed from nifti to tcks. Thanks.