Hi, thank you for sharing this work. Based on the code you provided, multi-GPU training appears to be equivalent to `torch.nn.DataParallel`. What changes would I need to make to get true distributed training with `torch.distributed` (i.e. `DistributedDataParallel`)? @skurzhanskyi @komelianchuk
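
Concretely, is something like the following the right direction? This is just a minimal sketch of the standard `torchrun` + `DistributedDataParallel` setup, not this repo's actual code; `MyModel` and the `TensorDataset` are placeholders for the project's real model and data pipeline:

```python
# Minimal DDP sketch; launch with: torchrun --nproc_per_node=NUM_GPUS train_ddp.py
import os

import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset


class MyModel(nn.Module):
    """Placeholder model; substitute the repo's actual model here."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(10, 2)

    def forward(self, x):
        return self.fc(x)


def main():
    # torchrun sets LOCAL_RANK / RANK / WORLD_SIZE in the environment.
    local_rank = int(os.environ["LOCAL_RANK"])
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(local_rank)

    # Placeholder data; replace with the repo's dataset.
    dataset = TensorDataset(torch.randn(1024, 10), torch.randint(0, 2, (1024,)))
    # DistributedSampler shards the data so each process sees a unique subset.
    sampler = DistributedSampler(dataset)
    loader = DataLoader(dataset, batch_size=32, sampler=sampler)

    model = MyModel().cuda(local_rank)
    # DDP replaces torch.nn.DataParallel: one process per GPU,
    # gradients all-reduced across processes during backward().
    model = DDP(model, device_ids=[local_rank])

    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    criterion = nn.CrossEntropyLoss()

    for epoch in range(2):
        sampler.set_epoch(epoch)  # reshuffle shards each epoch
        for x, y in loader:
            x, y = x.cuda(local_rank), y.cuda(local_rank)
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()  # gradient sync happens here
            optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

If so, which parts of the training loop here would need changing beyond wrapping the model in `DDP` and adding a `DistributedSampler`?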