kamenbliznashki / chexpert

CheXpert competition models -- attention augmented convolutions on DenseNet, ResNet; EfficientNet
MIT License

Multiple GPU training #6

Open WilliamLwj opened 4 years ago

WilliamLwj commented 4 years ago

Hi, I am trying to train the model for multiple epochs on two GPUs. Is there a way for me to specify multiple "--cuda" values so that I can use multiple GPUs?

kamenbliznashki commented 3 years ago

Hi - the code currently supports single-GPU training only. The easiest way to modify it for multi-GPU is to wrap the model in torch.nn.DataParallel - you can take a look at the PyTorch docs. A faster option is DistributedDataParallel, which is also covered in the PyTorch distributed docs; I used it to train a generative model on multiple GPUs, and you can look at that implementation in my other repo. Rough sketches of both approaches are below. Hope this helps.
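
Not the author's exact code, but a minimal sketch of the DataParallel route, with a toy `nn.Linear` standing in for the repo's DenseNet/ResNet; the wrapping is the only change relative to single-GPU training:

```python
import torch
import torch.nn as nn

# Toy model as a placeholder for the repo's network.
model = nn.Linear(10, 2)

if torch.cuda.device_count() > 1:
    # DataParallel replicates the model on every visible GPU on each
    # forward pass, scatters the batch across them along dim 0, and
    # gathers outputs (and gradients) back on the default device.
    model = nn.DataParallel(model)

model = model.cuda()

x = torch.randn(64, 10).cuda()  # batch is split across GPUs automatically
out = model(x)                  # shape (64, 2), gathered on the default GPU
```

The rest of the training loop stays unchanged, which is why DataParallel is the quickest retrofit, though the single gathering device becomes a bottleneck.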
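And a rough DistributedDataParallel sketch, again with a toy model and synthetic data in place of the repo's. It assumes one process per GPU on a single node, launched via `torchrun --nproc_per_node=N script.py`, which sets the `LOCAL_RANK` and `WORLD_SIZE` environment variables:

```python
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset

def train(local_rank: int, world_size: int, num_epochs: int = 3):
    # On a single node the local rank equals the global rank.
    dist.init_process_group('nccl', rank=local_rank, world_size=world_size)
    torch.cuda.set_device(local_rank)

    # Toy model and data as placeholders for the repo's network/dataset.
    model = nn.Linear(10, 2).to(local_rank)
    model = DDP(model, device_ids=[local_rank])
    dataset = TensorDataset(torch.randn(256, 10), torch.randint(0, 2, (256,)))

    # DistributedSampler gives each process a disjoint shard of the data.
    sampler = DistributedSampler(dataset, num_replicas=world_size, rank=local_rank)
    loader = DataLoader(dataset, batch_size=16, sampler=sampler)

    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()
    for epoch in range(num_epochs):
        sampler.set_epoch(epoch)  # reshuffle the shards each epoch
        for x, y in loader:
            x, y = x.to(local_rank), y.to(local_rank)
            opt.zero_grad()
            loss_fn(model(x), y).backward()  # DDP all-reduces grads here
            opt.step()

    dist.destroy_process_group()

if __name__ == '__main__':
    train(int(os.environ['LOCAL_RANK']), int(os.environ['WORLD_SIZE']))
```

Unlike DataParallel, each process keeps its own model replica and only gradients are synchronized (via NCCL all-reduce during `backward()`), which is why DDP scales better.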