jhgan00 / image-retrieval-transformers

(Unofficial) PyTorch implementation of "Training Vision Transformers for Image Retrieval" (El-Nouby et al., 2021).

training on multiple GPUs #4

Open enes3774 opened 1 year ago

enes3774 commented 1 year ago

Firstly, thanks for such an amazing and clear implementation of the paper. I want to train the model on 8 GPUs with 24 GB each. Should I only add torch.nn.DataParallel, or do I need to change other parts of the code as well? Thanks in advance.

jhgan00 commented 1 year ago

"I apologize for the delayed response. Currently, I am unable to test the code in a multi-GPU environment. Therefore, it is difficult for me to provide a definite answer, but it seems that DataParallel should be able to work. Thank you."