Closed: bdeck8317 closed this issue 3 years ago.
That is too long. How many volumes does the task have?
Also, this setting, task_confound[1]='36p', will increase the runtime of feat.
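For reference, "36p" conventionally refers to the 36-parameter confound model: 9 base regressors (6 motion parameters plus mean white-matter, CSF, and global signals), their temporal derivatives, and the squares of both. The exact regressor set and derivative convention used internally by XCP may differ; the following is only an illustrative sketch of that expansion, assuming a (volumes x 9) array of base confounds:

```python
import numpy as np

def expand_36p(base):
    """Expand 9 base confound regressors into a 36-parameter model.

    base: array of shape (n_volumes, 9) -- 6 motion parameters plus
    mean white-matter, CSF, and global signals.
    Returns shape (n_volumes, 36): the base regressors, their temporal
    derivatives (backward differences, first row zeroed), and the
    squares of both.
    """
    deriv = np.vstack([np.zeros((1, base.shape[1])), np.diff(base, axis=0)])
    return np.hstack([base, deriv, base ** 2, deriv ** 2])

# e.g. 960 volumes, as in this thread
confounds = np.random.randn(960, 9)
print(expand_36p(confounds).shape)  # (960, 36)
```

With 960 volumes this produces a 960 x 36 design, which is part of why feat runtime grows with this option.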
@a3sha2 , thanks for getting back to me.
We have 960 volumes. We would like to keep the 36-parameter (36p) confound regression.
Any other suggestions to speed up processing?
@a3sha2 @mattcieslak
Do either of you have suggestions for decreasing the processing time here?
I.e., what should the CPU and memory limits for the container be set to?
@a3sha2 , is there any way to speed up processing without removing the 36p?
How long does this normally take?
It depends on the data size. An average 5-minute BOLD scan at 3 mm resolution requires less than 10 GB of RAM and finishes in under 10 minutes. Let me see the design file you are using!
Hey @a3sha2 ,
The design file is the one above. We have 960 volumes with 36p confound regression. Our BOLD scans are about 5 minutes each, with a pre/post design, so each subject has 2 images: one pre and one post. With our current Docker container specifications, processing takes about 3 hours per image, i.e., 6 hours per subject. Feat then runs again based on our design file, for a total of about 12 hours to run one subject.
This seems very long.
Would you have any recommendations for optimizing our CPU-to-RAM ratio, such as 8 CPUs to 16 GB of RAM?
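For anyone tuning this, container CPU and memory limits can be pinned explicitly at launch with Docker's standard `--cpus` and `--memory` flags. A hedged sketch of the suggested 8 CPU / 16 GB allocation; the image tag, bind-mount paths, and design/cohort filenames below are placeholders, not taken from this thread, so substitute your own:

```shell
# Illustrative only: pin the container to 8 CPUs and 16 GB of RAM.
# Paths, filenames, and the image tag are placeholders.
docker run --rm \
  --cpus=8 \
  --memory=16g \
  -v /data/bids:/bids:ro \
  -v /data/out:/out \
  pennbbl/xcpengine:latest \
  -d /bids/task.dsn -c /bids/cohort.csv -o /out
```

Verify the xcpEngine argument names (`-d`, `-c`, `-o`) against the documentation for your installed version before using this.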
@a3sha2
Would you have any recommendations for optimizing our CPU-to-RAM ratio, such as 8 CPUs to 16 GB of RAM?
Thanks!
Describe the bug
I am attempting to perform task regression on task data via XCP, re-running feat during processing. As it stands, XCP takes approximately 4 hours to run a single session for one subject, and most of that time is spent running feat. Is there any way to optimize the Docker container to increase processing speed?