Closed by adpartin, 1 week ago
@adpartin We will also need to update the README to explain how to use the input_supp_dir parameter. It may also be useful to mention the example config files in the README. Or should we have another README inside /example_params_file that points to the corresponding model repos? WDYT?
I'll update the README and mention the example_params_files.
@priyanka9991 Do you see that the changes have actually been merged into develop?
@adpartin I cannot see these changes in develop; this branch has not been merged into develop yet. I am also not sure about adding 'available_accelerators' to workflow_preprocess.py.
I tested six models with Exp 1a, and all of them ran successfully, producing the expected results with this setting. It seems that models that don't require GPUs during preprocessing simply don't use them even when available_accelerators is passed; the setting only takes effect when a model actually uses GPUs. Given this behavior, I think it's safe to pass available_accelerators by default. What do you think?
Alright. So if passing available_accelerators neither limits the number of preprocessing processes launched nor assigns jobs to GPUs without utilizing them, then it should be alright.
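As a sketch of why this is safe (this is an illustration, not code from the PR): when Parsl's HighThroughputExecutor is given available_accelerators, it pins each worker to one device by setting CUDA_VISIBLE_DEVICES round-robin, so a preprocessing step that never touches the GPU simply ignores the variable. The helper below is hypothetical and only models that assignment:

```python
# Illustration only: round-robin pinning of workers to accelerators,
# mimicking what Parsl's HighThroughputExecutor does with
# `available_accelerators`. A worker that never uses the GPU just
# ignores its CUDA_VISIBLE_DEVICES value.

def assign_accelerators(n_workers, available_accelerators):
    """Hypothetical helper: map each worker to one accelerator id."""
    env = {}
    for worker in range(n_workers):
        gpu = available_accelerators[worker % len(available_accelerators)]
        env[worker] = {"CUDA_VISIBLE_DEVICES": str(gpu)}
    return env

# Four preprocessing workers sharing two GPUs:
mapping = assign_accelerators(4, ["0", "1"])
print(mapping[0]["CUDA_VISIBLE_DEVICES"])  # "0"
print(mapping[3]["CUDA_VISIBLE_DEVICES"])  # "1"
```

Under this model, passing the parameter never reduces worker parallelism; it only scopes which device a worker may see if it does use one.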
I tested it with deepcdr, graphdrp, hidra, pathdsp, tcnns, and uno on Exp 1a. The only odd thing is the very long training time for hidra (a single split with gcsi took more than 2.5 hours).