Closed ksh11023 closed 2 years ago
Yup, that's correct. I used a V100 GPU (32 GB), so I was able to fit those batch sizes in memory. You can always decrease the batch size if you need to fit in less GPU memory.
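As a sketch of what "decrease the batch size" looks like in practice: CaDDN is built on OpenPCDet, so training is typically launched roughly like the command below. The config path and flag follow OpenPCDet's usual layout and are assumptions here, not something confirmed in this thread; adjust them to your checkout.

```shell
# Assumed OpenPCDet-style invocation (paths/flags may differ in your setup).
# Lowering --batch_size reduces GPU memory use, at the cost of noisier
# gradients per step.
python train.py --cfg_file cfgs/kitti_models/CaDDN.yaml --batch_size 2
```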
Thank you for your reply!!
So, do I have to use a batch size of 8 on the KITTI benchmark to get the results in the paper? I am having trouble getting the same (or similar) results using a batch size of 4 on the KITTI benchmark.
The results in the paper are all reported with a batch size of 4, so this should be fine for any of your experiments. What results are you getting for your models?
When training on the KITTI train set (3712 samples) and validating on the val set (3769 samples):
In the paper: Car AP_R40 (IoU=0.7) easy: 23.57, mod: 16.31, hard: 13.84
My results: Car AP_R40 (IoU=0.7) easy: 21.92, mod: 15.04, hard: 12.68
I used batch size 4 and trained from scratch, with the other settings the same as in the paper.
That level of variance is expected between runs for CaDDN. Please see here for a more in-depth discussion about reproducibility. If you would like to achieve better results, I recommend performing multiple runs and selecting the one with the best results.
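The run-selection advice above can be sketched as a tiny helper; the run names and AP values below are purely illustrative, not numbers from the paper.

```python
# Hypothetical helper: given Car AP_R40 (moderate, IoU=0.7) scores from
# several training runs, pick the run with the best result.
def best_run(results):
    """results: dict mapping run name -> moderate AP_R40 score."""
    return max(results, key=results.get)

# Illustrative scores from three seeds (made up for this example).
runs = {"seed_0": 15.04, "seed_1": 15.88, "seed_2": 16.21}
print(best_run(runs))  # -> seed_2
```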
Cody thank you for the swift reply!
I'll follow your advice. Thanks again for sharing your great work:)
Hi, thank you for sharing your work.
When training on the KITTI benchmark, do you mean a batch size of 4 per GPU, which equals a batch size of 8 when using two GPUs? Am I getting this right?
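For clarity, the arithmetic being asked about is just this; whether the paper's "batch size 4" is per-GPU or global is exactly the open question here, so this only illustrates the relationship, not the answer.

```python
# Effective (global) batch size under data-parallel training:
# each GPU processes its own mini-batch, so the global batch is the product.
def effective_batch_size(per_gpu: int, num_gpus: int) -> int:
    return per_gpu * num_gpus

print(effective_batch_size(4, 2))  # -> 8
```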