Open jiapei100 opened 3 years ago
@jiapei100 The training consumes 11.x GB per GPU and needs 4 GPUs. My machine is a 12 GB TITAN X, whereas the 2080 Ti has 11 GB. You can reduce the batch size to 8.
@JiaRenChang
Thank you very much for your prompt reply ... I have ONLY 1 GPU (a 2080 Ti). Does that mean that no matter how I configure those arguments/parameters, there is NO way for me to train PSMNet?
Cheers
@jiapei100 You can use gradient checkpointing. https://pytorch.org/docs/stable/checkpoint.html
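A minimal sketch of what that looks like, in case it helps: `torch.utils.checkpoint.checkpoint` discards intermediate activations during the forward pass and recomputes them during backward, trading extra compute for a much smaller memory footprint. The module and names below (`Block`, `TinyNet`) are hypothetical stand-ins, not PSMNet code; in practice you would wrap PSMNet's memory-heavy stages (e.g. the 3D conv hourglass blocks) the same way.

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

class Block(nn.Module):
    """Hypothetical sub-module standing in for a memory-heavy stage."""
    def __init__(self, ch):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())

    def forward(self, x):
        return self.net(x)

class TinyNet(nn.Module):
    """Toy network whose blocks can optionally be checkpointed."""
    def __init__(self, ch=8, use_checkpoint=True):
        super().__init__()
        self.blocks = nn.ModuleList(Block(ch) for _ in range(4))
        self.use_checkpoint = use_checkpoint

    def forward(self, x):
        for blk in self.blocks:
            if self.use_checkpoint and x.requires_grad:
                # Activations inside blk are freed after the forward pass
                # and recomputed during backward, saving GPU memory.
                x = checkpoint(blk, x)
            else:
                x = blk(x)
        return x

x = torch.randn(2, 8, 16, 16, requires_grad=True)
net = TinyNet(use_checkpoint=True)
out = net(x)
out.sum().backward()  # gradients flow through the checkpointed blocks
```

The results are numerically identical to running without checkpointing; only peak activation memory changes, so it combines well with the batch-size reduction suggested above.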
Won't a 2080 Ti be able to fine-tune the pre-trained model?
Cheers Pei