nipreps / mriqc

Automated Quality Control and visual reports for Quality Assessment of structural (T1w, T2w) and functional MRI of the brain
http://mriqc.readthedocs.io
Apache License 2.0

NumberOfShells node fails for DWIs with only one volume in a shell #1157

Closed maxhenneke closed 7 months ago

maxhenneke commented 1 year ago

What happened?

When running MRIQC via Docker, it crashed with the exception "nipype.pipeline.engine.nodes.NodeExecutionError: Exception raised while executing Node shells." Based on the crash log, this may be happening because the node mriqc_wf.dwiMRIQC.shells receives "in_bvals = <undefined>" as an input. However, there are .bval files for all diffusion images and the dataset passes BIDS validation. The image causing the error is a b0 reference image, so its .bval file is just "0 0 0 0". When I moved the b0 reference image out of the BIDS directory and re-ran MRIQC, however, I got the same error for my b=1000 image.
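For context, a minimal sketch of why this seems to happen, assuming the shells node runs a cross-validated grid search over a KMeans clustering of the b-values (the parameter grid below is illustrative, not MRIQC's actual settings): with only b=0 volumes, every b-value falls below b0_threshold, so sklearn is left with zero samples to split.

```python
# Minimal sketch (not MRIQC's actual interface code) of the failure mode:
# a b0-only .bval file leaves nothing above the b0 threshold, so the
# cross-validated grid search has zero samples and raises the ValueError
# seen in the crash log.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import GridSearchCV

# File name from the crash log; b0_threshold from the node inputs.
bvals = np.loadtxt("sub-0003_ses-01_acq-PA_dwi.bval", ndmin=1)  # -> [0. 0. 0. 0.]
highb = bvals[bvals > 50.0]  # empty array for a b0-only series

# The parameter grid is an assumption for illustration only.
grid_search = GridSearchCV(KMeans(), param_grid={"n_clusters": range(1, 4)})
grid_search.fit(highb.reshape(-1, 1))
# ValueError: Cannot have number of splits n_splits=5 greater than the
# number of samples: n_samples=0.
```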

What command did you use?

docker run -it --rm -v /media/mridata/rdoc/Nifti:/data:ro -v /media/mridata/rdoc/mriqc:/out --user 1001:1001 nipreps/mriqc:23.1.0 /data /out participant --no-sub --participant_label 0003

What version of the software are you running?

23.1.0

How are you running this software?

Docker

Is your data BIDS valid?

Yes

Are you reusing any previously computed results?

No

Please copy and paste any relevant log output.

Output:
231117-16:10:23,530 cli IMPORTANT:

    Running MRIQC version 23.1.0:
      * BIDS dataset path: /data.
      * Output folder: /out.
      * Analysis levels: ['participant'].

231117-16:10:40,399 nipype.workflow WARNING:
         Storing result file without outputs
231117-16:10:40,400 nipype.workflow WARNING:
         [Node] Error on "mriqc_wf.dwiMRIQC.shells" (/tmp/work/mriqc_wf/dwiMRIQC/_in_file_..data..sub-0001..ses-01..dwi..sub-0001_ses-01_acq-PA_dwi.nii.gz/shells)
231117-16:10:41,550 nipype.workflow ERROR:
         Node shells.a1 failed to run on host 6f0b0b7e41ce.
231117-16:10:41,552 nipype.workflow ERROR:
         Saving crash info to /out/logs/crash-20231117-161041-UID1001-shells.a1-ed5f47d1-c380-43f0-be6e-ad152fd0153f.txt
Traceback (most recent call last):
  File "/opt/conda/lib/python3.9/site-packages/mriqc/engine/plugin.py", line 60, in run_node
    result["result"] = node.run(updatehash=updatehash)
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 527, in run
    result = self._run_interface(execute=True)
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 645, in _run_interface
    return self._run_command(execute)
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 771, in _run_command
    raise NodeExecutionError(msg)
nipype.pipeline.engine.nodes.NodeExecutionError: Exception raised while executing Node shells.

Traceback:
        Traceback (most recent call last):
          File "/opt/conda/lib/python3.9/site-packages/nipype/interfaces/base/core.py", line 397, in run
            runtime = self._run_interface(runtime)
          File "/opt/conda/lib/python3.9/site-packages/mriqc/interfaces/diffusion.py", line 166, in _run_interface
            grid_search = GridSearchCV(
          File "/opt/conda/lib/python3.9/site-packages/sklearn/model_selection/_search.py", line 875, in fit
            self._run_search(evaluate_candidates)
          File "/opt/conda/lib/python3.9/site-packages/sklearn/model_selection/_search.py", line 1375, in _run_search
            evaluate_candidates(ParameterGrid(self.param_grid))
          File "/opt/conda/lib/python3.9/site-packages/sklearn/model_selection/_search.py", line 834, in evaluate_candidates
            for (cand_idx, parameters), (split_idx, (train, test)) in product(
          File "/opt/conda/lib/python3.9/site-packages/sklearn/model_selection/_split.py", line 333, in split
            raise ValueError(
        ValueError: Cannot have number of splits n_splits=5 greater than the number of samples: n_samples=0.

Traceback (most recent call last):
  File "/opt/conda/bin/mriqc", line 8, in <module>
    sys.exit(main())
  File "/opt/conda/lib/python3.9/site-packages/mriqc/cli/run.py", line 168, in main
    mriqc_wf.run(**_plugin)
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/workflows.py", line 638, in run
    runner.run(execgraph, updatehash=updatehash, config=self.config)
  File "/opt/conda/lib/python3.9/site-packages/mriqc/engine/plugin.py", line 184, in run
    self._clean_queue(jobid, graph, result=result)
  File "/opt/conda/lib/python3.9/site-packages/mriqc/engine/plugin.py", line 256, in _clean_queue
    raise RuntimeError("".join(result["traceback"]))
RuntimeError: Traceback (most recent call last):
  File "/opt/conda/lib/python3.9/site-packages/mriqc/engine/plugin.py", line 60, in run_node
    result["result"] = node.run(updatehash=updatehash)
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 527, in run
    result = self._run_interface(execute=True)
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 645, in _run_interface
    return self._run_command(execute)
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 771, in _run_command
    raise NodeExecutionError(msg)
nipype.pipeline.engine.nodes.NodeExecutionError: Exception raised while executing Node shells.

Traceback:
        Traceback (most recent call last):
          File "/opt/conda/lib/python3.9/site-packages/nipype/interfaces/base/core.py", line 397, in run
            runtime = self._run_interface(runtime)
          File "/opt/conda/lib/python3.9/site-packages/mriqc/interfaces/diffusion.py", line 166, in _run_interface
            grid_search = GridSearchCV(
          File "/opt/conda/lib/python3.9/site-packages/sklearn/model_selection/_search.py", line 875, in fit
            self._run_search(evaluate_candidates)
          File "/opt/conda/lib/python3.9/site-packages/sklearn/model_selection/_search.py", line 1375, in _run_search
            evaluate_candidates(ParameterGrid(self.param_grid))
          File "/opt/conda/lib/python3.9/site-packages/sklearn/model_selection/_search.py", line 834, in evaluate_candidates
            for (cand_idx, parameters), (split_idx, (train, test)) in product(
          File "/opt/conda/lib/python3.9/site-packages/sklearn/model_selection/_split.py", line 333, in split
            raise ValueError(
        ValueError: Cannot have number of splits n_splits=5 greater than the number of samples: n_samples=0.

Crash Log:
Node: mriqc_wf.dwiMRIQC.shells
Working directory: /tmp/work/mriqc_wf/dwiMRIQC/_in_file_..data..sub-0003..ses-01..dwi..sub-0003_ses-01_acq-PA_dwi.nii.gz/shells

Node inputs:

b0_threshold = 50.0
in_bvals = <undefined>

Traceback (most recent call last):
  File "/opt/conda/lib/python3.9/site-packages/mriqc/engine/plugin.py", line 60, in run_node
    result["result"] = node.run(updatehash=updatehash)
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 527, in run
    result = self._run_interface(execute=True)
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 645, in _run_interface
    return self._run_command(execute)
  File "/opt/conda/lib/python3.9/site-packages/nipype/pipeline/engine/nodes.py", line 771, in _run_command
    raise NodeExecutionError(msg)
nipype.pipeline.engine.nodes.NodeExecutionError: Exception raised while executing Node shells.

Traceback:
        Traceback (most recent call last):
          File "/opt/conda/lib/python3.9/site-packages/nipype/interfaces/base/core.py", line 397, in run
            runtime = self._run_interface(runtime)
          File "/opt/conda/lib/python3.9/site-packages/mriqc/interfaces/diffusion.py", line 166, in _run_interface
            grid_search = GridSearchCV(
          File "/opt/conda/lib/python3.9/site-packages/sklearn/model_selection/_search.py", line 875, in fit
            self._run_search(evaluate_candidates)
          File "/opt/conda/lib/python3.9/site-packages/sklearn/model_selection/_search.py", line 1375, in _run_search
            evaluate_candidates(ParameterGrid(self.param_grid))
          File "/opt/conda/lib/python3.9/site-packages/sklearn/model_selection/_search.py", line 834, in evaluate_candidates
            for (cand_idx, parameters), (split_idx, (train, test)) in product(
          File "/opt/conda/lib/python3.9/site-packages/sklearn/model_selection/_split.py", line 333, in split
            raise ValueError(
        ValueError: Cannot have number of splits n_splits=5 greater than the number of samples: n_samples=0.

Additional information / screenshots

No response

oesteban commented 8 months ago

Hi @maxhenneke, indeed the DWI workflow is for diffusion-weighted images ... I hadn't thought of the b0 use case.

I believe we could offer some sort of EPI workflow for fMRI's SBRefs and dMRI's b=0 volumes.

Without that "EPI" mode, I don't think we can do anything other than crashing in a nicer way.

WDYT?

oesteban commented 8 months ago

Thinking about it, we could extend the anatomical workflow to cover these data types.

oesteban commented 7 months ago

Addressed by #1240. DWIs with five or fewer orientations are no longer considered DWI by MRIQC. They may be supported in the future as EPI images.
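For readers hitting this, a rough illustration of the rule described above (not the code merged in #1240; the function name and threshold argument are hypothetical): count the diffusion-weighted volumes in the .bval file and treat the run as non-DWI when there are too few.

```python
# Illustration only, not the actual change in #1240: sketch of the rule
# "runs with five or fewer diffusion orientations are not treated as DWI".
import numpy as np

def looks_like_dwi(bval_file, b0_threshold=50.0, min_orientations=6):
    """Return True when the series has enough diffusion-weighted volumes."""
    bvals = np.loadtxt(bval_file, ndmin=1)
    n_dw = int(np.sum(bvals > b0_threshold))  # volumes above the b0 threshold
    return n_dw >= min_orientations

# A b0-only reference (.bval of "0 0 0 0") gives n_dw == 0, so such a run
# would be skipped instead of reaching the NumberOfShells node.
```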