NVlabs / ODISE

Official PyTorch implementation of ODISE: Open-Vocabulary Panoptic Segmentation with Text-to-Image Diffusion Models [CVPR 2023 Highlight]
https://arxiv.org/abs/2303.04803

Minimum GPU requirements #29

Open · gusanmaz opened this issue 1 year ago

gusanmaz commented 1 year ago

I get a CUDA out-of-memory error when I run python demo/demo.py --input demo/examples/coco.jpg --output demo/coco_pred.jpg --vocab "black pickup truck, pickup truck; blue sky, sky" on an RTX 3060 GPU with 12 GB of VRAM.

The last lines of the error are as follows:

output_features[k] = torch.zeros(
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 176.00 MiB (GPU 0; 11.73 GiB total capacity; 8.91 GiB already allocated; 136.75 MiB free; 9.09 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

What are the minimum requirements for running the inference code? Is there a way to prevent these errors on less powerful systems? Is it possible to perform inference on the CPU?
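(For reference, the error message's own suggestion about max_split_size_mb can be applied before the first import of torch. A minimal sketch; the value 128 is an arbitrary guess, and this only reduces fragmentation, so it may not be enough by itself on a 12 GB card:)

```python
import os

# The allocator config must be set before PyTorch makes its first CUDA
# allocation, so set it before importing torch (or export it in the shell).
# 128 MB is a guess; this mitigates fragmentation, not total memory usage.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch  # noqa: E402  (imported after setting the env var on purpose)

print(torch.cuda.get_device_name(0))
```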

Thanks!

tejassp2002 commented 1 year ago

Hey @gusanmaz, did you ever find out the GPU requirements? Thanks.

cipri-tom commented 1 year ago

Yeah, they say you need at least 13 GB of VRAM. Here's an excerpt from running ODISE on a T4 GPU: 13.2 GB.

[Screenshot: GPU memory readout showing 13.2 GB in use while running ODISE on a T4]
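(If anyone wants to reproduce that number, a minimal sketch of reading PyTorch's own peak-memory counters; the helper name is made up, and you would call it right after the demo's prediction step:)

```python
import torch

def report_peak_vram(device: int = 0) -> None:
    """Print the peak GPU memory PyTorch has allocated/reserved so far.

    Hypothetical helper: call it once after inference has finished.
    """
    allocated = torch.cuda.max_memory_allocated(device) / 1024**3
    reserved = torch.cuda.max_memory_reserved(device) / 1024**3
    print(f"peak allocated: {allocated:.2f} GiB, peak reserved: {reserved:.2f} GiB")
```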

tejassp2002 commented 1 year ago

Hi! I can see that Colab offers 15 GB of GPU RAM, but the instance still crashes whenever I run the Colab code. Any workarounds for this?

cipri-tom commented 1 year ago

@tejassp2002 Probably because, during loading, the model briefly needs more than 12 GB of system RAM (Colab provides 12 GB), and Google provides no swap to absorb the spike. Note that this is system RAM, not GPU RAM:

[Screenshot: Colab resource monitor showing the system RAM spike during model loading]
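(To confirm the spike yourself, a small sketch that samples this process's resident memory in a background thread while the model loads; psutil is a third-party dependency, not something ODISE ships:)

```python
import threading
import time

import psutil  # third-party; pip install psutil

def watch_peak_rss(stop: threading.Event, interval: float = 0.1) -> None:
    """Sample this process's resident set size and report the peak on exit."""
    proc, peak = psutil.Process(), 0
    while not stop.is_set():
        peak = max(peak, proc.memory_info().rss)
        time.sleep(interval)
    print(f"peak system RAM (RSS): {peak / 1024**3:.2f} GiB")

stop = threading.Event()
watcher = threading.Thread(target=watch_peak_rss, args=(stop,), daemon=True)
watcher.start()
# ... build and load the ODISE model here, e.g. the setup in demo/demo.py ...
stop.set()
watcher.join()
```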