fangchangma / self-supervised-depth-completion

ICRA 2019 "Self-supervised Sparse-to-Dense: Self-supervised Depth Completion from LiDAR and Monocular Camera"
MIT License

Use your pretrained model: GPU run out of memory. 8.95 gb already allocated #46

Closed: oconnor127 closed this issue 3 years ago

oconnor127 commented 3 years ago

Hey, I just want to use your pretrained model and create some results (on val_selection_cropped). For some strange reason I get the following error:

RuntimeError: CUDA out of memory. Tried to allocate 210.00 MiB (GPU 0; 10.91 GiB total capacity; 8.95 GiB already allocated; 194.06 MiB free; 9.59 GiB reserved in total by PyTorch)

How can I avoid this? When I am not running your code, the GPU usage is low (approx. 500 MB). It seems strange that an 11 GB GPU is not enough for your code.

I use the following command:

python3 main.py --evaluate /home/username/Downloads/model_best.pth.tar --val select

I have not changed anything in any file.
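
In case it matters, I would also be fine with a workaround on the inference side. One thing I am wondering is whether wrapping the forward pass in torch.no_grad() would already help. A minimal sketch of what I mean (generic PyTorch, not this repository's exact evaluation loop; the loader and input format are placeholders):

```python
# Minimal sketch (generic PyTorch, not this repo's exact code):
# running the forward pass inside torch.no_grad() avoids keeping
# autograd buffers, which is usually the main memory saving at eval time.
import torch

def run_eval(model, loader, device="cuda"):
    model.eval()                      # disable dropout / batchnorm updates
    with torch.no_grad():             # no gradient graph is stored
        for batch in loader:          # 'loader' is a placeholder DataLoader
            batch = {k: v.to(device) for k, v in batch.items()}
            pred = model(batch)       # input format is a placeholder too
            # ... compute metrics (e.g. RMSE / MAE) on pred here ...
```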