Open alsbhn opened 2 years ago
Is there a way to train a GPL model with multiple GPUs? If yes, can that help with training on larger batches?

The easiest solution would be to go with `nn.DataParallel`, although it might not be the most efficient approach. It will definitely help with training on larger batches. I will give it a try and integrate this feature, if possible, in the near future.
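A minimal sketch of the `nn.DataParallel` approach suggested above. The `nn.Linear` model here is a hypothetical stand-in (the actual GPL training model is not shown in this thread); `DataParallel` replicates the wrapped module across the visible GPUs and splits each batch along dimension 0, and it transparently falls back to running the plain module when fewer than two GPUs are available:

```python
import torch
import torch.nn as nn

# Hypothetical placeholder for the GPL training model.
model = nn.Linear(16, 4)

# Wrap only when more than one GPU is visible; DataParallel splits each
# input batch along dim 0 and gathers the outputs back on the default device.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)
    model = model.cuda()

batch = torch.randn(8, 16)  # a larger effective batch is split across GPUs
if torch.cuda.device_count() > 1:
    batch = batch.cuda()

out = model(batch)
print(out.shape)  # output shape is unchanged by the wrapping
```

Note that `nn.DistributedDataParallel` is generally recommended over `nn.DataParallel` for efficiency (one process per GPU instead of a single-process, multi-thread replication), at the cost of a more involved launch setup.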