[Closed] jacob1017 closed this issue 2 years ago
Sometimes our experiments need multiple GPUs with specified hyper-parameters.
@jacob1017
```yaml
general:
    parallel_search: True
    parallel_fully_train: True
    devices_per_trainer: 2  # number of GPUs per trainer with the specified hyper-parameters
```
The pipeline consists of multiple steps. The search pipe step searches over multiple optimization configurations (HPO/network). Currently, our search algorithm defines a Generator to sample configurations restricted by the search space. The trainer then evaluates those configurations and feeds the results back to the search algorithm. My question is: how can we set up the trainer to support multi-GPU training?
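For context, the sample-evaluate-feedback loop described above can be sketched in plain Python. This is only an illustrative sketch, not the framework's real API: the `Generator`, `Trainer`, and `devices` handling here are all hypothetical names, and actual multi-GPU training would be done by the framework (e.g. by pinning each trainer process to its GPUs or wrapping the model for data parallelism):

```python
import random

# Hypothetical search space; the real one comes from the pipeline config.
SEARCH_SPACE = {"lr": [0.001, 0.01, 0.1], "depth": [8, 14, 20]}

class Generator:
    """Samples candidate configurations restricted by the search space."""
    def __init__(self, space):
        self.space = space
        self.history = []  # (config, score) feedback from trainers

    def sample(self):
        return {k: random.choice(v) for k, v in self.space.items()}

    def feedback(self, config, score):
        self.history.append((config, score))

class Trainer:
    """Evaluates one configuration.

    In a real framework, `devices` would correspond to
    `devices_per_trainer` GPUs assigned to this trainer (e.g. via
    CUDA_VISIBLE_DEVICES or a distributed data-parallel wrapper).
    """
    def __init__(self, devices):
        self.devices = devices  # e.g. [0, 1] when devices_per_trainer: 2

    def evaluate(self, config):
        # Stand-in for actual training; returns a mock validation score.
        return 1.0 / (config["lr"] * config["depth"])

generator = Generator(SEARCH_SPACE)
trainer = Trainer(devices=[0, 1])

# Search loop: sample a config, evaluate it, feed the score back.
for _ in range(5):
    cfg = generator.sample()
    score = trainer.evaluate(cfg)
    generator.feedback(cfg, score)

best_cfg, best_score = max(generator.history, key=lambda t: t[1])
```

In this sketch, multi-GPU support reduces to how `Trainer` maps its `devices` list onto actual hardware; the search loop itself is unchanged whether each trainer uses one GPU or several.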