Closed: araikes closed this issue 4 years ago
It looks like you are trying to concatenate two DWI series that have different FoVs (you can see different numbers in the offset). Would you be OK with resampling the second series into the FoV of the first? You'll have to be sure to use `--denoise-before-combining`, because the second half of your data will be interpolated after combining the two runs.
I don't think there'd be any issue resampling. I'll give it a shot and see.
Using `--denoise-before-combining` yields the same error. Additionally, further steps (i.e., `mrdegibbs` and `biascorr`) are being run separately for each shell acquisition.
To follow up on this, it looks like it did, in fact, concatenate the images: the output folder has them concatenated into a single image with the expected dimensions. However, the report was not produced:

```
[Node] Running "series_qc" ("qsiprep.interfaces.reports.SeriesQC")
200227-16:10:00,898 nipype.workflow INFO:
	 [Node] Finished "qsiprep_wf.single_subject_allo104_wf.dwi_finalize_ses_bline_acq_b_wf.series_qc".
200227-16:10:10,937 nipype.workflow INFO:
	 [Node] Finished "qsiprep_wf.single_subject_allo104_wf.dwi_finalize_ses_bline_acq_b_wf.transform_dwis_t1.final_b0_ref.b0ref_reportlet".
200227-16:10:14,548 nipype.workflow ERROR:
	 could not run node: qsiprep_wf.single_subject_allo104_wf.dwi_preproc_ses_ptx_acq_b_wf.pre_hmc_wf.dwi_qc_wf.concat_raw_dwis
200227-16:10:14,554 nipype.workflow ERROR:
	 could not run node: qsiprep_wf.single_subject_allo104_wf.dwi_preproc_ses_ptx_acq_b_wf.pre_hmc_wf.merge_and_denoise_wf.dwi_qc_wf.concat_raw_dwis
QSIPrep failed: Workflow did not execute cleanly. Check log for details
Traceback (most recent call last):
  File "/usr/local/miniconda/bin/qsiprep", line 8, in <module>
    sys.exit(main())
  File "/usr/local/miniconda/lib/python3.7/site-packages/qsiprep/cli/run.py", line 566, in main
    qsiprep_wf.run(**plugin_settings)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/workflows.py", line 599, in run
    runner.run(execgraph, updatehash=updatehash, config=self.config)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/plugins/base.py", line 191, in run
    report_nodes_not_run(notrun)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/plugins/tools.py", line 82, in report_nodes_not_run
    raise RuntimeError(('Workflow did not execute cleanly. '
RuntimeError: Workflow did not execute cleanly. Check log for details
```
Any recommendations for getting the report to generate here @mattcieslak?
Thanks
As a further follow-up, I finally had an opportunity to run the full dataset. It appears that if the FoVs are different, the report does not get generated, though the merged file appears to be written correctly.
However, if the FoVs are the same, it runs without issue (though for a group run with the same parameters, this means denoising each image separately when in fact they could be concatenated first).
@araikes, I think I am running into the same issue, where the dimensions and voxel sizes are not the same across all my DWI runs. I tried resampling all my niftis using the for loop below, but when I check `mrinfo` there are still different numbers in the offset, and the same qsiprep error occurs. Would you be able to share the commands you used to resolve this issue?

```shell
for scan in /dfs2/yassalab/rjirsara/ConteCenterScripts/Conte-One/bids/*/*/dwi/*.nii.gz; do
    mrresize -size 84,128,128 ${scan} test.nii.gz
    3drefit -zdel 1.69 test.nii.gz
    mv test.nii.gz ${scan}
    chmod ug+wrx ${scan}
done
```
Hi @araikes or @mattcieslak,
I was just wondering if there was any code or commands that would resolve this issue. I am still stuck at this step and would greatly appreciate anything you could share with me.
Hi @RobertJirsaraie
I did in fact get this working (as of yesterday). The code is a bit non-BIDS compliant because I was more interested in getting it to work than I was in getting it fully compliant (i.e., my output file name shouldn't be the same as the input, but I'm doing this from a nifti copy in my derivatives folder where I've also deobliqued the T1s... so with sufficient provenance, I see no problem in the short term). I'll let @mattcieslak chime in if he sees something that doesn't work about this, but I'm getting reports now for my whole dataset.
I'm running this in a docker environment that just has python, the Ni-family (nibabel, nilearn, nistats, nipype), pybids, and DIPY. Here I'm resampling the b2000 shell into the b1000 shell's FoV.
```python
#!/usr/bin/env python
import json

import nibabel as nib
import numpy as np
from bids.layout import BIDSLayout
from nilearn import image as img

# Get BIDS layout
layout = BIDSLayout(root='/data', validate=False)
subjects = layout.get_subjects()
sessions = layout.get_sessions()

# Initialize JSON data
resample_json = {}
resample_json["participantid"] = []

for subj in subjects:
    for sesh in sessions:
        b1000_file = layout.get(subject=subj, session=sesh,
                                suffix="dwi", acquisition="b1000",
                                extension="nii.gz",
                                return_type='filename')
        b2000_file = layout.get(subject=subj, session=sesh,
                                suffix="dwi", acquisition="b2000",
                                extension="nii.gz",
                                return_type='filename')
        if not b1000_file or not b2000_file:
            resample_json['participantid'].append({
                'participantid': subj,
                'session': sesh,
                'status': 'Incomplete acquisition capture'})
            continue

        b1000 = nib.load(b1000_file[0])
        b2000 = nib.load(b2000_file[0])
        check_fov = b1000.affine == b2000.affine
        if np.all(check_fov):
            resample_json['participantid'].append({
                'participantid': subj,
                'session': sesh,
                'status': 'FOV matches. Resampling not required.'})
            continue
        else:
            # Resample the b2000 run onto the b1000 run's grid, in place
            b2000_resample = img.resample_to_img(b2000, b1000)
            nib.save(b2000_resample, b2000_file[0])
            resample_json['participantid'].append({
                'participantid': subj,
                'session': sesh,
                'status': 'FOVs did not match. b2000 acquisition resampled.'})

with open('/data/resampling.json', 'w') as outfile:
    json.dump(resample_json, outfile, indent=4)
```
Hi @araikes and @RobertJirsaraie, I just realized that the easiest and most valid way to get around this (assuming none of your images were acquired obliquely) is to copy the affine from the first image into the second. If you resample the image before running qsiprep, the assumptions made by `mrdegibbs` and `dwidenoise` are going to be violated.
Hi @mattcieslak
So then, do you have a recommendation on transferring the affines?
You should be able to do something like

```python
import nibabel as nb

first_img = nb.load("sub-1_run-1_dwi.nii.gz")
second_img = nb.load("sub-1_run-2_dwi.nii.gz")
fixed_second_img = nb.Nifti1Image(second_img.get_data(), first_img.affine,
                                  header=first_img.header)
fixed_second_img.to_filename("sub-1_run-2_acq-fixedhdr_dwi.nii.gz")
```

and then run qsiprep with the original `sub-1_run-2_dwi.nii.gz` removed. I would still strongly suggest using `--denoise-before-combining`.
I'll give it a run and see.
Looking at the headers for my files, they are different in several areas. Should I be using the header from the b1000 acquisition in the re-FoV'd b2000? I would assume that the header should be the second image's header, since it was just the offsets that didn't match.
b1000:

```
sizeof_hdr      : 348
data_type       : b''
db_name         : b''
extents         : 0
session_error   : 0
regular         : b'r'
dim_info        : 57
dim             : [  4 256 256  50  26   1   1   1]
intent_p1       : 0.0
intent_p2       : 0.0
intent_p3       : 0.0
intent_code     : none
datatype        : int16
bitpix          : 16
slice_start     : 0
pixdim          : [-1.      0.8594  0.8594  3.     14.      0.      0.      0.    ]
vox_offset      : 0.0
scl_slope       : nan
scl_inter       : nan
slice_end       : 0
slice_code      : unknown
xyzt_units      : 10
cal_max         : 0.0
cal_min         : 0.0
slice_duration  : 0.0
toffset         : 0.0
glmax           : 0
glmin           : 0
descrip         : b'TE=89;Time=163847.000'
aux_file        : b''
qform_code      : scanner
sform_code      : scanner
quatern_b       : 0.0
quatern_c       : 1.0
quatern_d       : 0.0
qoffset_x       : 110.207
qoffset_y       : -83.30798
qoffset_z       : -82.3748
srow_x          : [ -0.8594   0.      -0.     110.207 ]
srow_y          : [ -0.       0.8594  -0.     -83.30798]
srow_z          : [  0.       0.       3.     -82.3748]
intent_name     : b''
magic           : b'n+1'
```

b2000:

```
sizeof_hdr      : 348
data_type       : b''
db_name         : b''
extents         : 0
session_error   : 0
regular         : b'r'
dim_info        : 57
dim             : [  4 256 256  50  21   1   1   1]
intent_p1       : 0.0
intent_p2       : 0.0
intent_p3       : 0.0
intent_code     : none
datatype        : int16
bitpix          : 16
slice_start     : 0
pixdim          : [-1.      0.8594  0.8594  3.     17.      0.      0.      0.    ]
vox_offset      : 0.0
scl_slope       : nan
scl_inter       : nan
slice_end       : 0
slice_code      : unknown
xyzt_units      : 10
cal_max         : 0.0
cal_min         : 0.0
slice_duration  : 0.0
toffset         : 0.0
glmax           : 0
glmin           : 0
descrip         : b'TE=1e+02;Time=163057.000'
aux_file        : b''
qform_code      : scanner
sform_code      : scanner
quatern_b       : 0.0
quatern_c       : 1.0
quatern_d       : 0.0
qoffset_x       : 110.207
qoffset_y       : -83.30798
qoffset_z       : -82.3748
srow_x          : [ -0.8594   0.      -0.     110.207 ]
srow_y          : [ -0.       0.8594  -0.     -83.30798]
srow_z          : [  0.       0.       3.     -82.3748]
intent_name     : b''
magic           : b'n+1'
```
Some DICOM converters add extra information to the pixel dimensions (like repetition time for BOLD data). It looks like that's where your headers vary.
Can you solve the problem by zeroing out the pixdims after the first three?
They also vary in the actual dimensions (particularly the number of directions acquired) as well as in the TE/TR description (I know it's a string).
Pragmatic question: if the solution is to zero out the pixdim differences (ignoring the other differences), does it actually matter which header is used, since all of the other information is the same (again, ignoring the other differences noted above)?
The TE/TR values are gathered from the JSON sidecars, so the nifti headers won't make a difference.
That's what I figured. Thanks @mattcieslak.
@mattcieslak, your code worked for me! I applied the affine transform to my niftis, which were already resampled to have the same number of voxels and voxel dimensions across all runs. Thank you very much!
Hi @mattcieslak, That code worked for me too. Successfully processed from nifti to tcks. Thanks.
Related to the dataset from #98, I get the following when concatenating (`ses-bline` shown, but `ses-ptx` has the same error reported). `mrinfo` on the dwis: