Closed: pcklink closed this issue 4 years ago
Done! Went with Option 1. There's a Python script in NHP-BIDS/code/subcode called bids_preproc_parallel_runs.py that does this.
The old serial way of doing it also still works, but I'd recommend doing it the new way. The script basically runs bids_preprocessing_workflow.py, and potentially also bids_warp2nmt_workflow.py, for each single run (warping can be excluded, but I suggest running it immediately). A few hours later >> RESULTS!
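The per-run job described above can be sketched as a small helper that builds the two workflow invocations for one run. This is a hedged illustration only: the `--csv` and `--run` flags are assumptions for the sake of the example, not the scripts' actual command-line interfaces.

```python
# Sketch: the commands a single-run job would execute, in order.
# Flags (--csv, --run) are illustrative assumptions, not the real CLI.
def build_run_commands(csv_file, run_id, include_warp=True):
    """Return the command lines for preprocessing (and optionally warping) one run."""
    cmds = [["python", "bids_preprocessing_workflow.py",
             "--csv", csv_file, "--run", str(run_id)]]
    if include_warp:
        # Warping can be excluded, but the suggestion above is to run it immediately.
        cmds.append(["python", "bids_warp2nmt_workflow.py",
                     "--csv", csv_file, "--run", str(run_id)])
    return cmds
```

Each command list could then be handed to the cluster scheduler as its own job, so runs proceed in parallel across jobs rather than serially within one.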
Currently, motion correction is parallelized within a run, while all runs in a job are done serially. We could potentially speed things up tremendously if runs were parallelized. This is probably not possible within a single job due to memory/CPU restrictions, but it will be possible if we create one job per run. [EDIT] Indeed, 3dAllineate already takes a lot of CPU, so we probably should not do this within-job. Note that we can in principle already do this by defining single-run csv files and starting jobs for each of them. However, it would be nice if it were possible with the same multi-run csv.
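The single-run-csv workaround mentioned above amounts to splitting one multi-run csv into one file per run. A minimal sketch, assuming a csv with a run column (only the `run` column name comes from the discussion; the csv layout and output naming are assumptions):

```python
# Hypothetical sketch: split a multi-run csv into one csv per run, so a
# separate cluster job can be submitted for each run.
import csv
from collections import defaultdict
from pathlib import Path

def split_runs(multi_run_csv, out_dir):
    """Write one csv per unique 'run' value; return the created file paths."""
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    with open(multi_run_csv, newline="") as f:
        reader = csv.DictReader(f)
        fieldnames = reader.fieldnames
        groups = defaultdict(list)          # rows grouped by run id
        for row in reader:
            groups[row["run"]].append(row)
    created = []
    for run_id, rows in groups.items():
        path = out_dir / f"run-{run_id}.csv"  # assumed naming scheme
        with open(path, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=fieldnames)
            writer.writeheader()
            writer.writerows(rows)
        created.append(path)
    return created
```

With the same multi-run csv as input, a wrapper along these lines could submit one job per generated file, which is roughly what the parallel-runs script automates.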
To establish this without messing up the workflow too much, we would need:
OPTION 1
OPTION 2
the run field from the csv