KaiqiangXiong / CL-MVSNet

[ICCV2023] CL-MVSNet: Unsupervised Multi-view Stereo with Dual-level Contrastive Learning
MIT License

too much memory #2

Closed zhz120 closed 9 months ago

zhz120 commented 9 months ago

Thank you for your excellent work, but your method requires a lot of GPU memory. Have you tested whether training with 96 GB of total GPU memory can achieve a similar reconstruction quality?

KaiqiangXiong commented 9 months ago

How much GPU memory does it take during your training? The method (nearly 12 GB) consumes less memory than RC-MVSNet (14.5 GB), and it can also be trained on a single 16 GB V100. Why do you need 96 GB of GPU memory? Sorry, I am confused about this.

zhz120 commented 9 months ago

Sorry, my question may not have been clear. What I meant is that the paper says 8 V100s are needed to complete the training process. If only 4 or even 2 V100s are used for training, will the results differ significantly?

KaiqiangXiong commented 9 months ago

I have only trained this network with 8 GPUs, which is simply faster. However, under common circumstances, the number of GPUs does not noticeably affect the performance of deep learning methods.
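If the concern is matching the 8-GPU effective batch size on fewer GPUs, gradient accumulation is a standard workaround: accumulate gradients over several mini-batches before each optimizer step. This is a minimal sketch, not part of the CL-MVSNet code; the assumption of one sample per GPU is illustrative only.

```python
import math

def accumulation_steps(target_gpus: int, available_gpus: int) -> int:
    """Gradient-accumulation steps so that
    available_gpus * steps >= target_gpus, i.e. the effective batch
    size matches (or slightly exceeds) the original multi-GPU setup,
    assuming the same per-GPU batch size in both configurations."""
    return math.ceil(target_gpus / available_gpus)

# With 4 GPUs, step the optimizer every 2 accumulated mini-batches
# to approximate the 8-GPU effective batch size; with 2 GPUs, every 4.
print(accumulation_steps(8, 4))  # -> 2
print(accumulation_steps(8, 2))  # -> 4
```

In a typical PyTorch loop this means dividing the loss by the accumulation count, calling `loss.backward()` each mini-batch, and calling `optimizer.step()` / `optimizer.zero_grad()` only every N-th mini-batch. Note that batch-norm statistics are still computed per mini-batch, so results may differ slightly from true large-batch training.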

zhz120 commented 8 months ago

Thanks for your answer; it helped me.