Closed Royzon closed 5 years ago
@Roython Use `nn.DataParallel`. Put the code in Learner.py ---> `train()` like this:
```python
def train(self, conf, epochs):
    self.model.train()
    # Pick the GPU if one is available, otherwise fall back to the CPU
    conf.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    # Replicate the model across four GPUs; batches are split along dim 0
    self.model = nn.DataParallel(self.model, device_ids=[0, 1, 2, 3])
    self.model.to(conf.device)
    running_loss = 0.
```
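For reference, here is a minimal, self-contained sketch of the same idea outside of Learner.py. The `nn.Linear` model and the tensor shapes are placeholders, not from this repo; the snippet wraps the model in `nn.DataParallel` only when more than one GPU is visible, and falls back to the CPU otherwise, so it runs anywhere:

```python
import torch
import torch.nn as nn

# Stand-in for self.model in Learner.py (hypothetical toy model)
model = nn.Linear(8, 2)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

if torch.cuda.device_count() > 1:
    # device_ids defaults to all visible GPUs;
    # pass device_ids=[0, 1, 2, 3] to pin specific ones
    model = nn.DataParallel(model)
model.to(device)

# Each forward pass splits the batch across the wrapped devices
x = torch.randn(16, 8).to(device)
out = model(x)
print(out.shape)  # torch.Size([16, 2])
```

Note that `nn.DataParallel(model)` returns a wrapper, so you must reassign it (as the snippet in `train()` does), and checkpoints saved afterwards will have their parameter names prefixed with `module.`.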
Thank you very much, it works.
I want to train with multiple GPUs. Where should I change the code?