Open bakerada opened 3 years ago
Hi, sorry for the delayed response. I used 8 24 GB GPUs (Quadro RTX 6000) to train the model. If you don't have large-memory GPUs, decreasing the batch size and learning rate and increasing the total iterations accordingly should also be fine.
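The scaling advice above can be sketched as a small helper. This is a hedged sketch of the linear scaling rule (scale the learning rate with the batch size and grow the iteration count so the total number of samples seen stays constant); the batch size, base LR, and iteration values below are hypothetical placeholders, not the repo's actual config.

```python
# Sketch of the linear scaling rule for training on fewer/smaller GPUs.
# All numeric values here are hypothetical placeholders.

def scale_training_config(base_batch, base_lr, base_iters, new_batch):
    """Scale LR down and iterations up in proportion to the batch size."""
    ratio = new_batch / base_batch
    return {
        "batch_size": new_batch,
        "lr": base_lr * ratio,                   # linear LR scaling
        "max_iters": round(base_iters / ratio),  # keep total samples seen constant
    }

# e.g. moving from an 8-GPU setup (batch 16) to a 2-GPU setup (batch 4)
cfg = scale_training_config(base_batch=16, base_lr=0.04, base_iters=90000, new_batch=4)
print(cfg)
```

Whether strictly linear LR scaling is optimal for this model is not stated in the thread; treat it as a starting point and validate on a short run.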
Thanks for the great work, Xingyizhou.
It would be great if you could include instructions on how to set up workers for distributed training!
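Pending official docs, here is a hedged sketch of the worker setup the request above asks about. PyTorch's `env://` distributed initialization expects the environment variables `MASTER_ADDR`, `MASTER_PORT`, `WORLD_SIZE`, `RANK`, and `LOCAL_RANK`; the helper below builds them for one worker. The address, port, and GPU counts are hypothetical placeholders, and the repo may ship its own launcher.

```python
# Sketch: environment variables one PyTorch distributed worker expects
# (env:// init). Host/port and GPU counts are hypothetical placeholders.
import os

def worker_env(master_addr, master_port, node_rank, gpus_per_node, nodes, local_rank):
    """Build the env vars for one worker process (one process per GPU)."""
    world_size = gpus_per_node * nodes
    rank = node_rank * gpus_per_node + local_rank
    return {
        "MASTER_ADDR": master_addr,     # address of the rank-0 node
        "MASTER_PORT": str(master_port),
        "WORLD_SIZE": str(world_size),  # total number of worker processes
        "RANK": str(rank),              # global rank of this worker
        "LOCAL_RANK": str(local_rank),  # GPU index on this node
    }

# e.g. the second GPU on the first of two 8-GPU nodes
env = worker_env("10.0.0.1", 29500, node_rank=0, gpus_per_node=8, nodes=2, local_rank=1)
print(env["RANK"], env["WORLD_SIZE"])  # 1 16
```

In practice `torchrun` (or the older `torch.distributed.launch`) sets these variables automatically; the sketch just shows what they mean.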
📚 Documentation
I have seen the GPUs used for inference noted. I was hoping you could share what GPU setup you used to train the models, particularly the R2-101-DCN-BiFPN variants.