Closed daranil closed 1 year ago
The error message reports that you have only 2 GB of VRAM on your GPU. That's extremely low; I'd be surprised if you manage to run the process even for 1 sample on such a device. If that's the only local hardware option you've got, use the online Colab version instead - the instances usually have 16 GB, which is enough for a few hundred samples (if the resolution is not too large). The number of steps affects only processing time, not memory.
Hi @eps696
I keep getting the error below. I am unable to run the code even with 30 samples and 30 steps.
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 58.00 MiB (GPU 0; 2.00 GiB total capacity; 1.58 GiB already allocated; 0 bytes free; 1.74 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
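As the traceback itself suggests, one thing worth trying (though it won't overcome a hard 2 GB limit) is setting `PYTORCH_CUDA_ALLOC_CONF` to reduce fragmentation in PyTorch's caching allocator. A minimal sketch, assuming the value `128` as an example split size:

```python
import os

# Must be set before the first CUDA allocation (ideally before importing
# torch), otherwise the caching allocator will ignore it.
# 128 MiB here is just an example value; smaller values reduce fragmentation
# at some performance cost.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"
```

Equivalently, it can be set in the shell before launching the script (script name here is hypothetical): `PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128 python generate.py`. Note this only helps when reserved memory is much larger than allocated memory, as the error message describes.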
Can you please help me resolve this issue? I have spent almost a full day trying to fix it, without success.
I would like to run the model with 500 samples and 500 steps and finally see a complete generated image.
Looking forward to hearing from you.
Thank you.