Checklist
[ ] The issue exists after disabling all extensions
[ ] The issue exists on a clean installation of webui
[ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
[X] The issue exists in the current version of the webui
[ ] The issue has not been reported before recently
[ ] The issue has been reported before but has not been fixed yet
What happened?
I hadn't opened this app in the last 10 days, and I understand it has been updated since then. I decided to generate a 768×1280 image exactly as I did 10 days ago. I only had "set COMMANDLINE_ARGS=--medvram" in that folder, because I have an Nvidia GeForce GTX 1650 with 4 GB of VRAM. Everything had worked fine for a long time, no matter what I was generating.

Now, when generating, it first said: "A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card doesn't support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use the --disable-nan-check commandline argument to disable this check."

I first tried enabling "Upcast cross attention layer to float32", then added "--no-half --disable-nan-check", and in both cases it started to report something like: "CUDA out of memory. Tried to allocate 960.00 MiB (GPU 0; 4.00 GiB total capacity; 1.50 GiB already allocated; 630.64 MiB free; 1.78 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF".
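For reference, here is roughly what the batch file looked like before the update. The PYTORCH_CUDA_ALLOC_CONF line is only what the out-of-memory message suggests trying; the value 128 is an assumption on my part, not something I have tested:

```bat
rem webui.bat -- the working setup from ~10 days ago (GTX 1650, 4 GB VRAM)
set COMMANDLINE_ARGS=--medvram

rem Suggested by the "CUDA out of memory" message (untested, value is a guess):
rem set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
```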
Steps to reproduce the problem
1. Launch webui.bat.
2. In txt2img, type "girl" in the positive prompt and generate → "A tensor with all NaNs was produced in Unet".
3. Close, then edit webui.bat to use --lowvram --no-half --disable-nan-check (the edited line is shown below).
4. Launch again, go to txt2img, type "girl" in the positive prompt and generate → CUDA out of memory.
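The edited line from step 3, spelled out as it would appear in the batch file (assuming the new flags replace the previous COMMANDLINE_ARGS value and nothing else in webui.bat changes):

```bat
rem webui.bat after the edit in step 3 (previously: --medvram)
set COMMANDLINE_ARGS=--lowvram --no-half --disable-nan-check
```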
What should have happened?
It should have generated the 768×1280 image without any problems, just like it did before the update.
What browsers do you use to access the UI ?
Google Chrome
Sysinfo
sysinfo-2023-12-24-02-58.json
Console logs
Additional information
The last thing I can remember doing is using the "StableDiffusion InvokeAI Base Cloud version" in Google Colab.