PennLINC / qsiprep

Preprocessing of diffusion MRI
http://qsiprep.readthedocs.io
BSD 3-Clause "New" or "Revised" License

omp_nthreads not passed to merge_and_denoise_wf on one branch #720

Open psadil opened 6 months ago

psadil commented 6 months ago

Summary

The merge_and_denoise_wf has a few nodes that can benefit from multithreading, but there is at least one call site where omp_nthreads isn't passed:

https://github.com/PennLINC/qsiprep/blob/89ba85b3cbdba4c0afad875c29d63154c6aa1fc7/qsiprep/workflows/dwi/pre_hmc.py#L320-L335

This results in a few unnecessarily slow steps (e.g., raw_gqi running with only 1 thread).
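For context (this is not qsiprep's actual code), the usual Nipype pattern is for a workflow factory to accept omp_nthreads and forward it both to each node's n_procs and to the tool's own thread-count input; a minimal sketch with a placeholder Function node standing in for raw_gqi:

from nipype.interfaces import utility as niu
from nipype.pipeline import engine as pe


def _echo(in_file, nthreads):
    """Stand-in for a multithreaded tool call; just returns the thread count."""
    return nthreads


def init_example_qc_wf(omp_nthreads=1, name="dwi_qc_wf"):
    """Toy workflow showing omp_nthreads forwarded to a compute node."""
    workflow = pe.Workflow(name=name)
    inputnode = pe.Node(niu.IdentityInterface(fields=["dwi_file"]), name="inputnode")

    # n_procs tells Nipype's MultiProc scheduler how many cores this node
    # claims; the nthreads input stands in for a tool flag such as
    # dsi_studio's --thread_count. If the factory is called without
    # omp_nthreads, both silently fall back to 1 -- the symptom reported here.
    raw_gqi = pe.Node(
        niu.Function(
            function=_echo,
            input_names=["in_file", "nthreads"],
            output_names=["nthreads_used"],
        ),
        name="raw_gqi",
        n_procs=omp_nthreads,
    )
    raw_gqi.inputs.nthreads = omp_nthreads

    workflow.connect([(inputnode, raw_gqi, [("dwi_file", "in_file")])])
    return workflow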

Additional details

$ apptainer run docker://pennbbl/qsiprep --version
INFO:    Using cached SIF image
qsiprep v0.20.1.dev0+geda84d5.d20240112

What were you trying to do?

Test a patch on qsiprep

What did you expect to happen?

I expected it to run a bit faster, and for the thread_count values in these two command.txt files to match:

$ cat $SS_ID_WF/dwi_preproc_ses_RU_wf/pre_hmc_wf/dwi_qc_wf/raw_gqi/command.txt
dsi_studio --action=rec --method=4 --align_acpc=0 --check_btable=0 --dti_no_high_b=1 --source=/tmp/work/qsiprep_wf/single_subject_travel2_wf/dwi_preproc_ses_RU_wf/pre_hmc_wf/dwi_qc_wf/raw_gqi/sub-travel2_ses-RU_dwi_merged.src.gz --num_fiber=3 --thread_count=8 --other_output=all --record_odf=1 --r2_weighted=0 --param0=1.2500 --thread_count=8

$ cat $SS_ID_WF/dwi_preproc_ses_RU_wf/pre_hmc_wf/merge_and_denoise_wf/dwi_qc_wf/raw_gqi/command.txt
dsi_studio --action=rec --method=4 --align_acpc=0 --check_btable=0 --dti_no_high_b=1 --source=/tmp/work/qsiprep_wf/single_subject_travel2_wf/dwi_preproc_ses_RU_wf/pre_hmc_wf/merge_and_denoise_wf/dwi_qc_wf/raw_gqi/sub-travel2_ses-RU_dwi_merged.src.gz --num_fiber=3 --thread_count=1 --other_output=all --record_odf=1 --r2_weighted=0 --param0=1.2500 --thread_count=1

What actually happened?

thread_count was set to 1 for the dwi_qc workflow when it was called from the merge_and_denoise workflow.
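For anyone checking their own working directory, here is a small sketch (the helper name is made up; the directory layout is the one shown above) that lists the --thread_count used by each raw_gqi invocation:

import re
from pathlib import Path


def report_thread_counts(workdir):
    """Print the --thread_count values found in each raw_gqi command.txt."""
    for cmd in sorted(Path(workdir).rglob("raw_gqi/command.txt")):
        counts = re.findall(r"--thread_count=(\d+)", cmd.read_text())
        print(cmd, counts or ["not found"])


# e.g. report_thread_counts("/tmp/work/qsiprep_wf")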

Reproducing the bug

mattcieslak commented 5 months ago

Good catch, thank you! Adding this fix in #725.