ToughStoneX / Self-Supervised-MVS

Pytorch codes for "Self-supervised Multi-view Stereo via Effective Co-Segmentation and Data-Augmentation"

Out of memory when trying to train the model on RTX 3070 #15

Closed CZH-cpu closed 2 years ago

CZH-cpu commented 2 years ago

Thanks for your excellent work!

After reading your paper, I am very interested in trying this out. I have an RTX 3070, which has 8 GB of video memory in total. I set batch_size to 1, but training still fails with an out-of-memory error:

```
RuntimeError: CUDA out of memory. Tried to allocate 480.00 MiB (GPU 0; 7.79 GiB total capacity; 5.06 GiB already allocated; 155.00 MiB free; 5.41 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
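For reference, the allocator option named in the traceback is controlled by the `PYTORCH_CUDA_ALLOC_CONF` environment variable. A minimal sketch of how it can be set; the value 128 here is an arbitrary example, not a tuned recommendation from the authors:

```python
import os

# Cap the size of cached allocator blocks to reduce fragmentation.
# Must be set before the first CUDA allocation, so set it before
# importing torch. 128 MiB is an arbitrary example value.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch  # imported after the variable is set so the allocator picks it up
```

This only mitigates fragmentation; it will not help if the model's working set genuinely exceeds 8 GB.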

I don't have the funds for a more capable card right now. Is there any other way to reduce the video memory needed to train the network? Or could you please share your trained xxx.ckpt file with me directly? My email address is 347330274@qq.com.
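One generic way to cut training memory on small GPUs is automatic mixed precision. A minimal sketch under the assumption of a standard PyTorch training loop; the toy model and random input below are hypothetical stand-ins, not code from this repository:

```python
import torch
from torch import nn
from torch.cuda.amp import autocast, GradScaler

# Toy stand-ins; the real network and dataloader come from this repository.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU()).cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scaler = GradScaler()

for _ in range(2):                         # placeholder for the training loop
    imgs = torch.randn(1, 3, 128, 160, device="cuda")
    optimizer.zero_grad()
    with autocast():                       # run the forward pass in fp16 where safe
        loss = model(imgs).mean()
    scaler.scale(loss).backward()          # scale the loss to avoid fp16 underflow
    scaler.step(optimizer)                 # unscale gradients, then step the optimizer
    scaler.update()
```

Lowering the input resolution or the number of depth hypotheses in the training config, if the repository exposes them, would also shrink the cost volume, which usually dominates memory in MVSNet-style networks.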

Thank you again for your excellent work, and I wish you good health and success!

ToughStoneX commented 2 years ago

The links have been sent to you via e-mail. Please check your inbox.