TencentARC / PhotoMaker

PhotoMaker
https://photo-maker.github.io/

CUDA out of memory #47

Open BerglinJ opened 5 months ago

BerglinJ commented 5 months ago

I get CUDA out of memory errors on my RTX 3060 12 GB.

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 2.00 GiB (GPU 0; 11.76 GiB total capacity; 8.63 GiB already allocated; 1.37 GiB free; 9.31 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Is it possible to run it on 12 GB of VRAM?
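For reference, the numbers in the traceback above already explain the failure. Here is a quick sketch that unpacks that exact message (the parsing is mine, not part of PyTorch): the request exceeds the free memory, and the reserved-vs-allocated gap is small, so this is a genuinely full card rather than fragmentation.

```python
import re

# The exact allocator message from the traceback above.
msg = ("CUDA out of memory. Tried to allocate 2.00 GiB (GPU 0; 11.76 GiB "
       "total capacity; 8.63 GiB already allocated; 1.37 GiB free; "
       "9.31 GiB reserved in total by PyTorch)")

# Pull out the five GiB figures in the order they appear.
fields = dict(zip(
    ["tried", "total", "allocated", "free", "reserved"],
    (float(x) for x in re.findall(r"(\d+\.\d+) GiB", msg)),
))

# The request (2.00 GiB) exceeds free memory (1.37 GiB), hence the OOM.
assert fields["tried"] > fields["free"]

# reserved >> allocated would point at fragmentation (the max_split_size_mb
# hint); here the gap is only ~0.68 GiB, so the GPU is simply full.
gap = fields["reserved"] - fields["allocated"]
print(f"reserved - allocated = {gap:.2f} GiB")
```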

Xavier31 commented 5 months ago

Same error with a 3080 12 GB. I got it to work by giving it only one image and setting num_images_per_prompt=1.
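In code, the workaround (plus the allocator setting the traceback suggests) looks roughly like this. This is only a sketch: `gen_kwargs` is a hypothetical name for the keyword arguments you would pass to the PhotoMaker pipeline call, and 128 is an illustrative value for `max_split_size_mb`.

```python
import os

# Must be set before `import torch` first initializes CUDA (assumption:
# a fresh process). Mitigates fragmentation per the error message's hint.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

# Generation settings that fit a 12 GB card per this thread; pass these
# to the pipeline call (hypothetical dict name).
gen_kwargs = {
    "num_images_per_prompt": 1,  # the change that fixed the OOM here
}
```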

BerglinJ commented 5 months ago

Thanks, I got it to work too with 1 image per prompt.

Xavier31 commented 5 months ago

Yeah, actually I don't think the number of input images makes much difference to memory usage; it seems to be mostly num_images_per_prompt that matters.

the-klingspor commented 5 months ago

Interesting, on a RTX 2080ti I can't even load the model in the jupyter notebook. Any idea why it runs for inference on a 3060 with 12GB while it can't even be loaded on the 2080ti with 11GB?

Xavier31 commented 5 months ago

> Interesting, on a RTX 2080ti I can't even load the model in the jupyter notebook. Any idea why it runs for inference on a 3060 with 12GB while it can't even be loaded on the 2080ti with 11GB?

When I monitor GPU memory usage, it is constantly at 10-11 GB, and inference sometimes breaks if I am running something else alongside. So I guess 11 GB is just under the limit (sorry).
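A quick way to watch usage like this from Python is to poll `nvidia-smi` (a sketch; the helper name is mine, and it returns an empty list on machines without an NVIDIA driver):

```python
import shutil
import subprocess

def gpu_memory_used_mib():
    """Return per-GPU used memory in MiB via nvidia-smi, or [] if unavailable."""
    if shutil.which("nvidia-smi") is None:
        return []
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=memory.used",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    # One line per GPU, e.g. "10342"
    return [int(line) for line in out.split() if line]

print(gpu_memory_used_mib())
```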

omkar-12bits commented 4 months ago

I tried setting the output to 1 image, but I'm still getting the OOM error on Colab. It fails after processing: usage sits at a constant 9-10 GB, then it errors while trying to allocate 4.5 GB. Why does this happen, and how can I avoid the error? Any ideas?