Hi @mooncakehub,
Yes, you can train this backbone using multiple GPUs. I have been training models with this code using anywhere from 1 to 8 GPUs.
For example, on nuScenes, I am using:
The flag `--multiprocessing-distributed` activates multi-GPU training on all available GPUs on your node. You can refer to this section of the README file for other examples.

Please also adapt the number of workers for data loading in the config file. This parameter defines the total number of workers across all GPUs: if you have 4 available GPUs, then `num_workers: 12` means each GPU will be assigned 3 workers for data loading.

Hope this helps!
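As a quick sanity check, the per-GPU split described above is plain integer division (this is an illustration of the arithmetic, not code from this repository):

```python
# Illustration only: how a global dataloading worker budget is typically
# divided across processes in multi-GPU training.
num_gpus = 4       # available GPUs on the node
num_workers = 12   # total workers, as set in the config file

workers_per_gpu = num_workers // num_gpus
print(workers_per_gpu)  # → 3
```

So if you change the number of GPUs, scale `num_workers` accordingly to keep the same per-GPU worker count.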