NehaNishikant opened this issue 2 years ago (Open)
Hi, does `train_mhop.py` support distributed training?
I noticed a call to `torch.distributed.init_process_group`, but `n_gpu` is hardcoded to 1.