Hi, at this time we don't support multi-GPU training. I found that a single GPU is sufficient for the experiments in this project.
What GPU did you use?
In fact, I can't run the experiments on a single 2080 Ti. What is the minimum GPU memory needed for all experiments?
We use A100-40GB GPUs for all experiments. I also recommend reading our paper, especially the Appendix, for more implementation details if you have further questions. If you run into a CUDA out-of-memory issue, I recommend reducing the batch size or implementing distributed training.
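If it helps, here is a minimal sketch of distributed training with PyTorch `DistributedDataParallel`. This is not this project's code; the model, data, and script name below are placeholders, and you would need to adapt it to the project's training loop and config (e.g. its `NUM_GPUS` setting):

```python
# Generic PyTorch DDP sketch; model and data are placeholders.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets LOCAL_RANK / RANK / WORLD_SIZE for each worker process.
    local_rank = int(os.environ["LOCAL_RANK"])
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(128, 10).cuda(local_rank)   # placeholder model
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    # Placeholder data; a real run would use a DataLoader with a DistributedSampler.
    for step in range(10):
        x = torch.randn(32, 128, device=local_rank)
        y = torch.randint(0, 10, (32,), device=local_rank)
        loss = torch.nn.functional.cross_entropy(model(x), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Launched with, for example, `torchrun --nproc_per_node=2 train_ddp.py` (the script name is illustrative). Each process handles one GPU, and gradients are averaged across processes automatically by DDP.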
If I set NUM_GPUS=2, I get the following errors. Could you please tell me how to use multiple GPUs?