Closed TyroneLi closed 4 years ago
Thanks for your interest in our work. Yes, our codebase assumes that there is only one gpu. Unfortunately, we do not have a plan for extending our code for multi-gpu training.
However, you can modify our code following the PyTorch distributed guideline: https://pytorch.org/tutorials/intermediate/dist_tuto.html
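To illustrate the kind of change the tutorial describes, here is a minimal, hedged sketch of wrapping a model in `DistributedDataParallel`. It is not the repo's actual training loop: the `nn.Linear` model is a stand-in for the VGG/ResNet/GoogLeNet networks, and it runs a single CPU process with the `gloo` backend so it can be tried without multiple GPUs.

```python
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP


def demo(rank=0, world_size=1):
    # Rendezvous settings; with real multi-GPU training you would launch
    # one process per GPU (e.g. via torch.multiprocessing.spawn or torchrun)
    # and pass each process its own rank.
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    # Stand-in model; replace with the network built from the repo's config.
    model = nn.Linear(10, 2)
    ddp_model = DDP(model)  # gradients are all-reduced across processes

    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)

    # One dummy step; a real loop would also use a DistributedSampler
    # so each process sees a distinct shard of the dataset.
    x = torch.randn(4, 10)
    loss = ddp_model(x).sum()
    loss.backward()
    optimizer.step()

    dist.destroy_process_group()
    return loss.item()
```

On GPUs you would additionally move the model to `rank`'s device, use the `nccl` backend, and pass `device_ids=[rank]` to `DDP`.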
Thanks for your work! I will try to apply distributed training. Also, could you provide more details about training VGG/ResNet/GoogLeNet on the different datasets, e.g. the learning rate? I can't find these hyper-parameters in config.py. Thanks.
You can find all configurations here.
As mentioned above, how can I modify the code to enable multi-GPU training and testing in PyTorch? I found I can only run it on a single GPU, and it is really slow. Thanks.