mli0603 / stereo-transformer

Revisiting Stereo Depth Estimation From a Sequence-to-Sequence Perspective with Transformers. (ICCV 2021 Oral)
Apache License 2.0

How much cuda memory is needed to train? #16

Closed Leviosaaaa closed 3 years ago

Leviosaaaa commented 3 years ago

Hi! Thank you for sharing such great work! My question is how much CUDA memory is needed for training. I read in your paper that you trained on a single Titan RTX GPU; does that mean this network needs a 24GB GPU to train with a batch size of 1? Thanks~

mli0603 commented 3 years ago

Hi @Leviosaaaa, sorry I must have missed this!

No, of course not! Based on my nvidia-smi output, the requirement is about 8GB if you train on Sceneflow with the default settings. If you run into memory constraints, there are a couple of ways to work around it:

Let me know if you still have questions ;)

Leviosaaaa commented 3 years ago

Thank you!