WilliamLwj opened this issue 4 years ago
Hi, I am trying to train the model for multiple epochs on two GPUs. Is there a way for me to specify multiple "--cuda" values so that I can use multiple GPUs?

Hi - the code is only for single-GPU training as it is. The easiest way to modify it for multi-GPU is to wrap the model in torch.nn.DataParallel - you can take a look at the PyTorch docs here. A faster implementation would be DistributedDataParallel - you can read more about it here. I used this for training a generative model on multiple GPUs and you can look at that implementation here. Hope this helps.
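For reference, a minimal sketch of the DataParallel approach (the model, loss, and batch below are placeholders for illustration, not this repo's actual code):

```python
import torch
import torch.nn as nn

# Placeholder model -- substitute the actual model from this repo.
model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))

if torch.cuda.device_count() > 1:
    # DataParallel replicates the model on each visible GPU, splits every
    # input batch across the replicas, and gathers outputs back on GPU 0.
    model = nn.DataParallel(model)
model = model.to("cuda")

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Dummy batch for illustration; in practice this comes from your DataLoader.
inputs = torch.randn(64, 128, device="cuda")
targets = torch.randint(0, 10, (64,), device="cuda")

optimizer.zero_grad()
outputs = model(inputs)  # scatter/gather across GPUs happens automatically
loss = criterion(outputs, targets)
loss.backward()
optimizer.step()
```

The training loop itself is unchanged; only the model wrapping differs. For DistributedDataParallel you would instead launch one process per GPU (e.g. with torchrun) and wrap the model in torch.nn.parallel.DistributedDataParallel, which is faster but requires more setup.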