mynlp / cst_captioning

PyTorch Implementation of Consensus-based Sequence Training for Video Captioning

Multi-GPU training support #3

Open chihyaoma opened 6 years ago

chihyaoma commented 6 years ago

Hi,

I am trying to use multiple GPUs on my workstation with your code, so I pass GID=0,1,2,3 on the command line to start a training session. However, it seems that only one GPU is actually being used.

Going through your code, I was unable to find DataParallel anywhere, so I am wondering whether the code supports multi-GPU training at all.

If not, I might be able to take a look at adding it.

plsang commented 6 years ago

Yep, multi-GPU is not supported at the moment.
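In case it helps, below is a minimal sketch of what wrapping the model in `nn.DataParallel` could look like. The module used here is a stand-in, not the repo's actual captioning model, and the variable names are assumptions for illustration only:

```python
import torch
import torch.nn as nn

# Stand-in module for illustration only; the repo's real captioning model
# would be constructed here instead.
model = nn.LSTM(input_size=512, hidden_size=512, batch_first=True)

if torch.cuda.is_available():
    model = model.cuda()
    # Split each batch across all visible GPUs (e.g. GID=0,1,2,3, which maps
    # to CUDA_VISIBLE_DEVICES=0,1,2,3); outputs and gradients are gathered
    # back on the first device.
    if torch.cuda.device_count() > 1:
        model = nn.DataParallel(model)

# The training loop itself is unchanged: calling model(inputs) dispatches
# the forward pass to all wrapped GPUs.
```

One caveat with this approach: after wrapping, custom attributes and methods of the original model have to be accessed through `model.module`, so any code that calls them directly would need a small adjustment.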