TRI-ML / packnet-sfm

TRI-ML Monocular Depth Estimation Repository
https://tri-ml.github.io/packnet-sfm/
MIT License

How much GPU memory needed to produce the reported performance? #112

Closed ruili3 closed 3 years ago

ruili3 commented 3 years ago

Hi, thanks a lot for the work! I have a question about the GPU memory requirements of PackNet-SfM. In the paper, the batch size is set to 4 and a total of 8 V100 GPUs are used for parallel training. How much GPU memory is needed to run the settings reported in the paper (full model with 192x640 input size)?

And one more question: does batch size == 4 mean there are 4 images on every GPU, so the total batch size is 32 (4 x 8 GPUs)?

Thank you very much!

VitorGuizilini-TRI commented 3 years ago

Thank you for the interest in our work! Each GPU has 16 GB memory, and the network takes up a good percentage of that, so you will need 8 x 16 GB GPUs. Reducing the batch size to 2 shouldn't hurt the performance significantly, so you can probably get away with half of that and train longer.

Yes, it's a batch size of 4 per GPU, so the total is 32.
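
As a quick sanity check, here is the batch-size arithmetic in a minimal Python sketch (the numbers are the ones quoted in this thread; the variable names are illustrative, not part of the packnet-sfm API):

```python
# Under data-parallel training, each GPU processes its own mini-batch,
# so the effective batch size is the per-GPU batch times the GPU count.
per_gpu_batch_size = 4       # batch size 4, as reported in the paper
num_gpus = 8                 # 8 x V100, 16 GB each
effective_batch_size = per_gpu_batch_size * num_gpus
print(effective_batch_size)  # 32

# Halving the per-GPU batch (4 -> 2) roughly halves activation memory,
# which is why smaller or fewer GPUs can work if you train longer,
# as suggested above.
```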

ruili3 commented 3 years ago

Got it, thanks a lot for the reply!
