BerglinJ opened 5 months ago
Same error with a 3080 12 GB. I got it to work by giving it only one input image and setting num_images_per_prompt=1.
Thanks, got it to work too with one image per prompt.
Yeah, actually I don't think the number of input images makes much difference to memory usage; it seems that num_images_per_prompt is what matters most.
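The workaround above (keep num_images_per_prompt at 1 and loop instead) can be sketched as follows, assuming a diffusers-style pipeline; `pipe` here is a placeholder for whatever pipeline you have loaded, not a specific API from the thread:

```python
def generate_one_at_a_time(pipe, prompt, total_images):
    """Generate `total_images` images by calling the pipeline repeatedly
    with num_images_per_prompt=1, so peak VRAM stays close to the
    single-image case instead of scaling with the batch size."""
    images = []
    for _ in range(total_images):
        out = pipe(prompt, num_images_per_prompt=1)
        images.extend(out.images)
    return images
```

Each call still pays the full per-image cost, so this is slower than batching, but it trades wall-clock time for a lower memory peak.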
Interesting, on an RTX 2080 Ti I can't even load the model in a Jupyter notebook. Any idea why it runs inference on a 3060 with 12 GB while it can't even be loaded on the 2080 Ti with 11 GB?
When I monitor GPU memory usage it is constantly over 10/11 GB, and inference sometimes breaks if I am running something else alongside. So I guess 11 GB is just under the limit (sorry).
I tried setting the number of output images to 1, but I'm still getting the OOM error on Colab. It fails after processing, jumping from a steady 9-10 GB to an error about allocating 4.5 GB. Why does that happen, and how can I avoid this error? Any ideas?
I get CUDA out of memory messages on my RTX 3060 12 GB.
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 2.00 GiB (GPU 0; 11.76 GiB total capacity; 8.63 GiB already allocated; 1.37 GiB free; 9.31 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
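The traceback itself points at one mitigation: since reserved memory (9.31 GiB) is well above allocated memory (8.63 GiB), setting max_split_size_mb via PYTORCH_CUDA_ALLOC_CONF can reduce fragmentation. A minimal sketch; the 512 value is an assumption you would tune for your workload, and the variable must be set before the first CUDA allocation:

```python
import os

# Must be set before torch touches the GPU (ideally before `import torch`,
# or from the shell before launching the script / notebook kernel).
# max_split_size_mb caps the size of blocks the caching allocator will
# split, which helps when reserved memory >> allocated memory.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:512"
```

The same thing can be done from the shell with `export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512` before starting Python.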
Is it possible to run it on 12 GB of VRAM?