Closed: Heechberri closed this issue 10 months ago
It looks like the file is damaged. Have you inspected /data/sub-EXCI0153/ses-1/func/sub-EXCI0153_ses-1_task-rest_bold.nii.gz with a third-party tool? My guess is that if you try to look at the last volume, you'll get an error in that tool as well. This can happen when a file fails to be fully copied. Re-copying it from its source (or re-converting it, if the source is DICOM) should resolve it.
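A quick way to confirm a partial copy is to check that the gzip stream decompresses all the way to its end-of-stream marker. The sketch below is a self-contained demo (it fabricates and truncates a temporary file to simulate an interrupted copy); for a real check, point `fully_readable` at the suspect .nii.gz instead.

```python
import gzip
import os
import tempfile

# Demo setup: write a small gzip file, then truncate it to simulate
# a copy that was cut off partway through. (Paths here are temporary
# demo files, not the actual dataset.)
fd, path = tempfile.mkstemp(suffix=".nii.gz")
os.close(fd)
with gzip.open(path, "wb") as f:
    f.write(b"\x00" * 1_000_000)
with open(path, "r+b") as f:
    f.truncate(os.path.getsize(path) // 2)  # simulate an interrupted copy

def fully_readable(p):
    """Return True if the gzip stream decompresses to the end without error."""
    try:
        with gzip.open(p, "rb") as f:
            while f.read(1 << 20):
                pass
        return True
    except (EOFError, OSError):
        return False

print(fully_readable(path))  # → False for the truncated demo file
os.remove(path)
```

An intact file returns True; a truncated one raises EOFError partway through decompression, which is the same underlying condition nibabel surfaces as the byte-count OSError.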
Hi effigies :)
Thanks for the quick reply.
I did inspect the file in both MRIcron and fsleyes, and it didn't appear any different from the other files. Could I send you the file, if you don't mind?
I will also try exporting the data from the scanner and converting it again :)
If you can run fsleyes, you should also be able to run:
python -c "import nibabel as nb; nb.load('/data/sub-EXCI0153/ses-1/func/sub-EXCI0153_ses-1_task-rest_bold.nii.gz').get_fdata()"
I would expect this to have the same failure case as get_dummy(). If that succeeds, I would try re-running, as there may have been some one-off hiccup.
What happened?
For some of the subjects, fMRIprep outputs this error:
Node: fmriprep_22_0_wf.single_subject_EXCI0153_wf.func_preproc_ses_1_task_rest_wf.initial_boldref_wf.get_dummy
Working directory: /tmp/work/fmriprep_22_0_wf/single_subject_EXCI0153_wf/func_preproc_ses_1_task_rest_wf/initial_boldref_wf/get_dummy
Node inputs:
in_file = /data/sub-EXCI0153/ses-1/func/sub-EXCI0153_ses-1_task-rest_bold.nii.gz
n_volumes = 40
nonnegative = True
zero_dummy_masked = 20
Traceback (most recent call last):
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/plugins/legacymultiproc.py", line 67, in run_node
    result["result"] = node.run(updatehash=updatehash)
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 524, in run
    result = self._run_interface(execute=True)
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 642, in _run_interface
    return self._run_command(execute)
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 750, in _run_command
    raise NodeExecutionError(
nipype.pipeline.engine.nodes.NodeExecutionError: Exception raised while executing Node get_dummy.
Traceback (most recent call last):
  File "/opt/conda/lib/python3.9/site-packages/nipype/interfaces/base/core.py", line 398, in run
    runtime = self._run_interface(runtime)
  File "/opt/conda/lib/python3.9/site-packages/niworkflows/interfaces/bold.py", line 84, in _run_interface
    data = img.get_fdata(dtype="float32")[..., :self.inputs.n_volumes]
  File "/opt/conda/lib/python3.9/site-packages/nibabel/dataobj_images.py", line 355, in get_fdata
    data = np.asanyarray(self._dataobj, dtype=dtype)
  File "/opt/conda/lib/python3.9/site-packages/nibabel/arrayproxy.py", line 370, in __array__
    arr = self._get_scaled(dtype=dtype, slicer=())
  File "/opt/conda/lib/python3.9/site-packages/nibabel/arrayproxy.py", line 337, in _get_scaled
    scaled = apply_read_scaling(self._get_unscaled(slicer=slicer), scl_slope, scl_inter)
  File "/opt/conda/lib/python3.9/site-packages/nibabel/arrayproxy.py", line 311, in _get_unscaled
    return array_from_file(self._shape,
  File "/opt/conda/lib/python3.9/site-packages/nibabel/volumeutils.py", line 468, in array_from_file
    raise IOError(f"Expected {n_bytes} bytes, got {n_read} bytes from "
OSError: Expected 182241280 bytes, got 35153493 bytes from object
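For context on where the "Expected N bytes" figure in the final OSError comes from: nibabel expects prod(image shape) × dtype itemsize bytes of voxel data, and the error means the decompressed stream ended well short of that. A minimal sketch of the arithmetic, using a hypothetical shape and dtype (not read from the real file):

```python
from math import prod

# Hypothetical 4D BOLD series: 64 x 64 x 40 voxels, 200 timepoints,
# stored as int16 (2 bytes per voxel). These numbers are illustrative,
# not the actual dimensions of the failing run.
shape = (64, 64, 40, 200)
itemsize = 2  # bytes per int16 voxel

expected_bytes = prod(shape) * itemsize
print(expected_bytes)  # → 65536000
```

If the file on disk decompresses to fewer bytes than this header-derived expectation, array_from_file raises exactly the IOError seen above, which is why a truncated copy fails only once the full data block is read.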
What command did you use?
What version of fMRIPrep are you running?
22.0.0
How are you running fMRIPrep?
Docker
Is your data BIDS valid?
Yes
Are you reusing any previously computed results?
No
Please copy and paste any relevant log output.
Additional information / screenshots
No response