TSugiura0000 opened 5 months ago
The issue, I believe, is that you are attempting to provide what was generated by the preproc-level analysis explicitly as the input dataset to the participant-level analysis. This is contrary to the BIDS Apps specification, which says that the first positional argument is the BIDS Raw dataset and the second is the derivatives location; this is not the same as "what is the input to processing" vs. "what is output from processing" in cases where an App doesn't take exclusively raw data as input. The mounted "input" and "output" directories should be exactly the same regardless of whether you are running the preproc or the participant level. When running the participant-level analysis, it looks inside the purported output directory trying to find the output of the preproc-level analysis. Admittedly the error message isn't ideal: it should report that it can't find the root-level preproc output directory rather than an inability to find specific DWI files for a specific participant, but I think that's the issue.
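To illustrate (hypothetical paths, and any other command-line options omitted), both analysis levels would be pointed at the same pair of directories:

python mrtrix3_connectome.py /data/bids /data/derivatives preproc -participant_label 100307
python mrtrix3_connectome.py /data/bids /data/derivatives participant -participant_label 100307

The participant level then looks for the preproc-level output inside /data/derivatives itself; the first argument remains the raw BIDS dataset in both invocations.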
(While I use the "muh BIDS App spec" excuse here, technically the BIDS App spec says that the analysis level should be either "participant" or "group", so I'm actually in violation of that spec in a different way; but I never claimed to believe that that specification was ideal...)
Preprocessed the HCP-YA dataset for a single subject (sub-100307) at the "preproc" level.
Are you taking the "minimally preprocessed" HCP data and then running it through the preproc analysis level? Because that would be highly inappropriate. Many processing steps performed by the preproc analysis level have already been applied in the creation of those data, and others are mathematically inappropriate to apply to such data.
The T1-weighted image for subject 100307 is not skull-stripped, as evidenced by the attached image (see below) where the T1w image (sub-100307_desc-preproc_T1w.nii.gz in grayscale) is overlaid with the DWI image (sub-100307_desc-preproc_dwi.nii.gz in red). This suggests that the T1w image still contains the skull.
There's a chance this is another slightly misleading error message. If the "SkullStripped" metadata field is not specified, then a heuristic is invoked: it calculates the fraction of T1w voxels that contain the value 0, and if that fraction is sufficiently high, it infers that an explicit zero-filling step has previously been applied. But I suppose it is erroneous to assume that such a zero-filling process has zeroed out everything except the brain; maybe in these data the zeroing was applied outside of the skull, in which case the inference "presence of zero-filling -> skull-stripped" is erroneous. You could likely get around this without code modification by specifying "SkullStripped": false in the T1w JSON.
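For instance, a minimal sidecar entry (any other fields in the T1w JSON left unchanged) might read:

{
  "SkullStripped": false
}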
@Lestropie
Thank you for your previous guidance. Following your suggestions, I have made the following adjustments and retried running the MRtrix3 Connectome pipeline:
Actions Taken:
Directory Specification Adjustment: I specified the bids_dir and output_dir to be the same as in the "preproc" level processing.
Modification of the T1w JSON File: I changed the SkullStripped field in the T1w JSON file generated by the "preproc" level processing to false.
Use of HCP-YA Data: The HCP-YA data I am using is from the Unprocessed dataset, not the minimally processed dataset.
Results:
During recon-all processing, an error occurred stating that the directory /output/mrtrix3_connectome.py-tmp-VWFDWZ does not exist, causing the process to enter an infinite loop. On the host side, the /output directory does contain a tmp directory, but its name does not match VWFDWZ.
Questions:
Directory Path Specification: Could there be an issue with how the directory paths are specified, as you suggested?
recon-all Temporary Directory Issue: Is there a potential bug whereby the name of the tmp directory automatically generated before the recon-all step does not match the tmp directory name passed to recon-all?
I would appreciate any advice on how to resolve this issue.
Thank you for your assistance.
I've no idea how well this tool will go executing on the HCP-YA unprocessed data. Some years back I had a student attempt to replicate the HCP-YA preprocessing, and we were unable to produce data of the same quality as their provided minimally processed data, even though we were using their containerised versions of the software. Gradient non-linearity geometric distortions are also likely to cause many more problems on the Connectom scanner, but that correction is currently absent from this pipeline.
For the recon-all absent-directory issue, I'd need to see the full log. The existing scratch directory of a different name is most likely vestigial from a prior attempted execution. In the absence of more detailed information, I'm unsure why the pipeline would be giving a bad scratch path to FreeSurfer.
Thank you for your assistance.
After removing the scratch directory that appeared to be left over from previous runs, the issue with FreeSurfer not finding the scratch directory was resolved. Thank you for your advice.
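For reference, what I removed was the leftover scratch directory in my output location, following the naming pattern seen in the logs (the random suffix differs per run; the path here is simply my own output directory):

rm -r /projectroot/data/preproc/output/dir/mrtrix3_connectome.py-tmp-XXXXXX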
Should I avoid using this pipeline on raw HCP data?
I would suggest using the minimally processed HCP data unless there's a strong justification otherwise. While there's the potential for improvements in data quality compared to the original processing (eg. with PCA denoising, within-volume motion correction), in the absence of a more robust study comparing them there's no guarantee that re-processed data won't be net poorer quality. Like I mentioned, we tried this at one point and were never content with the results we got, though we didn't do an exhaustive evaluation of preprocessing parameters. There was a promise of an HCP data release with an alternative preprocessing pipeline, with an emphasis on improved susceptibility field estimation, but I've not been able to find anything about a data release: https://cds.ismrm.org/protected/22MProceedings/PDFfiles/0425.html.
I understand now why it is better to use the minimally preprocessed data.
I want to create a structural connectivity matrix using the HCP MMP1.0 atlas, but I am unsure how to do this from the minimally preprocessed HCP data. I have been looking for a method but haven't found one yet. Therefore, I am attempting to use this pipeline to process the raw data. Any advice you can provide would be greatly appreciated.
You could potentially trick this tool into utilising the minimally preprocessed data: convert those data into a BIDS Derivatives layout labelled as though it were the output of the preproc analysis level ("MRtrix3_connectome-preproc"), and then execute the participant analysis level against it.
You could look at this repo created by a colleague and see if it is of use.
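A very rough sketch of the first idea, assuming hypothetical paths and that the file naming should mimic what the preproc level itself produces (the listing further down this thread can serve as a reference, but the details would need checking):

mkdir -p /data/derivatives/MRtrix3_connectome-preproc/sub-100307/anat
mkdir -p /data/derivatives/MRtrix3_connectome-preproc/sub-100307/dwi
# copy/convert the HCP minimally preprocessed T1w, DWI, bvals/bvecs and brain masks into this
# tree using the sub-100307_desc-preproc_* and sub-100307_desc-brain_mask naming, then run:
python mrtrix3_connectome.py /data/bids /data/derivatives participant -participant_label 100307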
Thank you for your response.
Regarding your suggestion to convert the "minimally preprocessed" data into a BIDS derivative format and label it as "MRtrix3_connectome-preproc," then execute it at the "participant" level, would this ensure correct processing?
I intend to use the HCP MMP1.0 atlas, and I have observed that when using this atlas, FreeSurfer's "recon-all" is executed within the "participant" level processing. As far as I understand, "recon-all" is already applied within the HCP pipeline to the "minimally preprocessed" data. Is it acceptable for this process to be applied again?
Hi @Lestropie
Description: I encountered an error during the participant-level processing step using the MRtrix3 Connectome pipeline. The error suggests that the pipeline is unable to import the requisite pre-processed data. Below are the details of my setup and the specific error messages encountered.
Steps to Reproduce:
Environment:
Server: Docker Engine - Community
 Engine:
  Version: 26.1.3
  API version: 1.45 (minimum version 1.24)
  Go version: go1.21.10
  Git commit: 8e96db1
  Built: Thu May 16 08:33:48 2024
  OS/Arch: linux/amd64
  Experimental: false
 containerd:
  Version: 1.6.32
  GitCommit: 8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89
 runc:
  Version: 1.1.12
  GitCommit: v1.1.12-0-g51d5e94
 docker-init:
  Version: 0.19.0
  GitCommit: de40ad0
OS: Debian 11 bullseye
Kernel: x86_64 Linux 5.10.0-28-amd64
CPU: 13th Gen Intel Core i9-13900K @ 32x 7GHz
GPU: NVIDIA GeForce RTX 4070
RAM: 31 GB
python mrtrix3_connectome.py /projectroot/data/bids/dir /projectroot/data/preproc/output/dir preproc -participant_label 100307
/bids_dataset/sub-100307/
├── anat/
│   ├── sub-100307_desc-brain_mask.nii.gz
│   ├── sub-100307_desc-preproc_T1w.json
│   └── sub-100307_desc-preproc_T1w.nii.gz
├── dwi/
├── eddyqc/
│   ├── eddy_outlier_map
│   ├── eddy_outlier_n_sqr_stdev_map
│   └── eddy_outlier_n_stdev_map
├── sub-100307_desc-brain_mask.nii.gz
├── sub-100307_desc-preproc_dwi.bval
├── sub-100307_desc-preproc_dwi.bvec
├── sub-100307_desc-preproc_dwi.json
└── sub-100307_desc-preproc_dwi.nii.gz
mrtrix3_connectome.py: ... [DEBUG] run.command() ... mrconvert /bids_dataset/sub-100307/dwi/sub-100307_desc-preproc_dwi.nii.gz /output/mrtrix3_connectome.py-tmp-SM01FL/dwi.mif ...
mrconvert: [INFO] opening image "/bids_dataset/sub-100307/dwi/sub-100307_desc-preproc_dwi.nii.gz"...
...
mrtrix3_connectome.py: [ERROR] Unable to import requisite pre-processed data from either specified input directory or MRtrix3_connectome output directory
  Error when attempting load from "/bids_dataset": Cannot execute FreeSurfer for obtaining parcellation: input T1-weighted image is already skull-stripped
  Error when attempting load from "/output/MRtrix3_connectome-preproc": No DWIs found for session "sub-100307"