airsplay / lxmert

PyTorch code for EMNLP 2019 paper "LXMERT: Learning Cross-Modality Encoder Representations from Transformers".
MIT License

Fine-tuning VQA on multiple GPUs #111

Open simplelifetime opened 2 years ago

simplelifetime commented 2 years ago

I'm trying to reproduce some results with a downloaded pre-trained model, but when I set GPU_ID to 0,1,2,3 the program does not seem to run on multiple GPUs as I expected. How do I properly fine-tune the pretrained model on multiple GPUs?
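
For context: if GPU_ID only ends up as something like `CUDA_VISIBLE_DEVICES=0,1,2,3`, that controls which devices the process can see, not whether the model is actually replicated across them; the model still has to be wrapped for multi-GPU execution (e.g. with `torch.nn.DataParallel`), or the repository's own multi-GPU switch has to be enabled if its run scripts expose one. Below is a minimal, self-contained sketch of `DataParallel` fine-tuning in plain PyTorch. The toy model, the 3129-way answer head, and the dummy batch are illustrative placeholders, not the repo's actual VQA code.

```python
import torch
import torch.nn as nn

# Toy stand-in for the fine-tuning model; the real LXMERT VQA model takes
# image features, boxes, and a sentence, but the multi-GPU wrapping is the same.
model = nn.Sequential(nn.Linear(2048, 768), nn.ReLU(), nn.Linear(768, 3129))

if torch.cuda.device_count() > 1:
    # Replicates the module across all GPUs visible to the process
    # (e.g. CUDA_VISIBLE_DEVICES=0,1,2,3) and splits each batch along dim 0.
    model = nn.DataParallel(model)
if torch.cuda.is_available():
    model = model.cuda()

optimizer = torch.optim.Adam(model.parameters(), lr=5e-5)
criterion = nn.BCEWithLogitsLoss()

feats = torch.randn(32, 2048)    # dummy batch of pooled image features
target = torch.rand(32, 3129)    # dummy soft VQA answer targets; 3129 is a typical VQA answer-vocab size
if torch.cuda.is_available():
    feats, target = feats.cuda(), target.cuda()

logit = model(feats)             # with DataParallel, each replica sees a slice of the batch
loss = criterion(logit, target)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

For larger setups, `torch.nn.parallel.DistributedDataParallel` with one process per GPU generally scales better, but `DataParallel` is the smallest change to an existing single-GPU fine-tuning script.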