CoBrALab / RABIES

fMRI preprocessing pipeline and analysis tools adapted for rodent images. Visit the full documentation at https://rabies.readthedocs.io/en/stable/

1 bad file out of 50 subjects during preprocessing #293

Closed geowk closed 1 year ago

geowk commented 1 year ago

Hello,

We recently preprocessed about 50 rat brains using RABIES 0.4.8. The majority of subjects processed fine, with the exception of 1 problematic subject at the end. We ran this with the fast_commonspace=true option.

We received the following message in the log at the end of the rabies_preprocess.log file:

230623-14:46:29,981 nipype.workflow CRITICAL: RABIES failed: Workflow did not execute cleanly. Check log for details

Thus, when proceeding to confound correction, it failed due to the missing 'rabies_preprocess_workflow.pkl' file, which was never written because of the problematic subject during preprocessing.

Error message when attempting confound_correction:

Traceback (most recent call last):
  File "/home/rabies/miniforge/bin/rabies", line 7, in <module>
    exec(compile(f.read(), __file__, 'exec'))
  File "/home/rabies/RABIES/scripts/rabies", line 3, in <module>
    execute_workflow()
  File "/home/rabies/RABIES/rabies/run_main.py", line 46, in execute_workflow
    workflow = confound_correction(opts, log)
  File "/home/rabies/RABIES/rabies/run_main.py", line 255, in confound_correction
    workflow = init_main_confound_correction_wf(preprocess_opts, opts)
  File "/home/rabies/RABIES/rabies/confound_correction_pkg/main_wf.py", line 20, in init_main_confound_correction_wf
    split_dict, split_name, target_list = read_preproc_workflow(preproc_output, nativespace=cr_opts.nativespace_analysis)
  File "/home/rabies/RABIES/rabies/confound_correction_pkg/main_wf.py", line 290, in read_preproc_workflow
    node_dict = get_workflow_dict(preproc_workflow_file)
  File "/home/rabies/RABIES/rabies/utils.py", line 378, in get_workflow_dict
    with open(workflow_file, 'rb') as handle:
FileNotFoundError: [Errno 2] No such file or directory: '/preprocess_outputs/rabies_preprocess_workflow.pkl'
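The traceback bottoms out in `get_workflow_dict` calling `open()` directly on the pickle path, so a preprocessing run that stops early surfaces later as a bare `FileNotFoundError`. A minimal sketch of how such a workflow pickle gets read, with a hypothetical defensive check (this is not the actual RABIES code):

```python
import pickle
from pathlib import Path

def load_workflow_dict(workflow_file):
    # Hypothetical defensive variant of a pickle load like get_workflow_dict:
    # fail with an actionable message instead of a bare FileNotFoundError.
    path = Path(workflow_file)
    if not path.exists():
        raise FileNotFoundError(
            f"{path} is missing; preprocessing likely did not finish cleanly. "
            "Check rabies_preprocess.log for the failing subject."
        )
    with open(path, 'rb') as handle:
        return pickle.load(handle)
```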

Is there a way to resolve this issue without having to re-preprocess the remaining 49 brains, so that we can run confound_correction and then analysis on just the 49 already-preprocessed subjects, excluding the 1 bad subject?

Thanks.

g

Gab-D-G commented 1 year ago

Hi, to understand what happened during preprocessing, we need to take a look at the specific error message. Can you share the log file?

geowk commented 1 year ago

Hello,

Thank you for taking the time to look into it. Please see the attached file.

I suspect the problem isn't the RABIES preprocessing stream itself, but that 1 dataset.

sub-07_ses-03_acq-grpA02

[Node] Error on "main_wf.bold_main_wf.bold_hmc_wf.ants_MC" (/preprocess_outputs/main_wf/bold_main_wf/bold_hmc_wf/_scan_info_subject_id07.session03_split_name_sub-07_ses-03_desc-o_T2w/_run_None/ants_MC)
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command 'antsMotionCorr -d 3 -o [ants_mc_tmp/motcorr,ants_mc_tmp/motcorr.nii.gz,ants_mc_tmp/motcorr_avg.nii.gz] -m MI[ /preprocess_outputs/main_wf/bold_main_wf/gen_bold_ref/_scan_info_subject_id07.session03_split_name_sub-07_ses-03_desc-o_T2w/_run_None/gen_ref/sub-07_ses-03_task-rest_acq-grpA02_desc-oa_bold_despike_bold_ref.nii.gz , /preprocess_outputs/main_wf/bold_main_wf/_scan_info_subject_id07.session03_split_name_sub-07_ses-03_desc-o_T2w/_run_None/despike/sub-07_ses-03_task-rest_acq-grpA02_desc-oa_bold_despike.nii.gz , 1 , 20 , regular, 0.2 ] -t Rigid[ 0.25 ] -i 50x20 -s 1x0 -f 2x1 -u 1 -e 1 -l 1 -n 10 -v 0' returned non-zero exit status 1.

Looking at the unprocessed NIfTI file, there was a ton of movement going on during the scan. We had excluded this dataset, but neglected to remove it before processing with RABIES. In retrospect, we should have removed it prior to preprocessing.
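For future runs, one quick way to flag a scan like this before preprocessing is to compute the frame-to-frame intensity change across the 4D series; large spikes mark volumes where the brain moved sharply (or left the field of view), which is what makes antsMotionCorr fail. A rough screening sketch on synthetic data (the 5x-median threshold is an arbitrary assumption, not a validated cutoff, and no substitute for visual QC):

```python
import numpy as np

def frame_rms_change(data_4d):
    """Root-mean-square voxel change between consecutive volumes (last axis)."""
    diffs = np.diff(data_4d.astype(np.float64), axis=-1)
    return np.sqrt((diffs ** 2).mean(axis=(0, 1, 2)))

# Synthetic example: a stable series with one corrupted volume.
rng = np.random.default_rng(0)
series = rng.normal(100.0, 1.0, size=(8, 8, 8, 20))
series[..., 10] += 50.0  # simulate a large motion artifact at volume 10
rms = frame_rms_change(series)
# Transitions into and out of the bad volume both spike, so two
# consecutive diff indices get flagged.
suspect = np.where(rms > 5 * np.median(rms))[0]
```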

However, looking at the remaining 49 subjects' BOLD data and their multiple sessions in commonspace, they look good.

We think that is the main reason for the missing rabies_preprocess_workflow.pkl file. Could we use the --read_datasink option for the confound_correction phase without the rabies_preprocess_workflow.pkl file?

Thanks.

g

rabies_preprocess.log

Gab-D-G commented 1 year ago

Hi, yes, this is likely due to excessive motion. There was probably some timepoint where the brain was out of the frame, which caused that registration error.

Using --read_datasink should work in this case to mitigate the issue. Otherwise, since you are using fast_commonspace, it should be possible to remove the subject from the input directory and re-run preprocessing without having to re-compute the preprocessing steps: RABIES should be able to read the old outputs and avoid re-computation. I'd say --read_datasink is a safer bet, though.
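For the second option (removing the subject from the input), one non-destructive way is to build a filtered view of the BIDS input via symlinks and point RABIES at that instead; the original data stays untouched. A hypothetical helper sketch (the directory layout and the `exclude` naming are assumptions, not part of RABIES):

```python
from pathlib import Path

def make_filtered_input(bids_dir, filtered_dir, exclude=("sub-07",)):
    """Mirror a BIDS input directory with symlinks, skipping excluded subjects.

    Hypothetical helper: pointing the preprocessing run at `filtered_dir`
    re-runs without the bad subject while the on-disk data stays untouched.
    """
    bids_dir, filtered_dir = Path(bids_dir), Path(filtered_dir)
    filtered_dir.mkdir(parents=True, exist_ok=True)
    for entry in bids_dir.iterdir():
        if entry.name in exclude:
            continue  # drop the excluded subject folder(s)
        link = filtered_dir / entry.name
        if not link.exists():
            link.symlink_to(entry.resolve())
```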

Gabe

geowk commented 1 year ago

Thanks, Gabe. Despite not having the rabies_preprocess_workflow.pkl file, using --read_datasink during the confound_correction step of the analysis ran to completion.

Thanks again.

-g