jtran-developer opened this issue 1 year ago
I'm getting the same thing. I did a clean install, changed the model, VAE, LoRA, and upscaler; same result. I even switched to a premium GPU on Colab to see if maybe Google bonked it on their end for the standard GPU. Nothing seems to work.
Try the runpod notebooks.
In other words, the issue is on the A1111 side and once they fix it the fix will come through to the colab so hold tight? If so, cool, will do. If I'm misunderstanding, let me know. And it's just an annoyance right now rather than a showstopper so I'll just keep rolling with the colab rather than moving to runpod. Thanks again for the notebook!
If A1111 fixes it, the fix will be reflected in the notebook.
Thanks, I appreciate you looking into this for us.
Same problem since this morning; the colab won't let me generate any images at all, and I've tried everything. I'll be checking for updates soon. Thanks for the help @TheLastBen.
This notebook clones the latest repository but does not have the problem when generating large images. Maybe you can use it as a temporary solution until this is fixed.
The downside of this notebook is that the setup can be confusing, and the initial setup can be slow.
Thanks @eskaviam! Is there a notebook with ControlNet like this one? Or is there a way to install ControlNet in that notebook? That would be great!
Has anyone solved this problem? If there is a solution, please post it here.
Still happening for me and I've tried everything I've seen on here or on the A1111 issue thread.
so do I.
According to the A1111 issue thread, the problem may be with xformers, and there is a fix that may be part of the 0.0.19 release.
OK, the following fixed this, at least for my version of this problem: remove --xformers as a command-line argument and add --opt-sdp-attention. I did this at the bottom of the "Start Stable Diffusion" section of the colab notebook (show code, then scroll all the way down). I made this change in each of the three versions of the command line, although I don't really know what I'm doing and probably didn't have to. :) The downside is that xformers is more memory efficient, so I can't upscale quite as large in hires fix now, but at least it's sort of working.
edit: taking this back, please disregard
@buckwheaton Your fix has been implemented
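For anyone unsure what the swap actually looks like: it's a purely textual edit to the launch command. A minimal sketch (the launch line below is hypothetical; the real one in the notebook carries different flags):

```python
# Hypothetical launch line -- the actual flags in the notebook differ.
launch = "python launch.py --xformers --share"
# The fix: drop --xformers and use PyTorch's scaled-dot-product attention instead.
launch = launch.replace("--xformers", "--opt-sdp-attention")
print(launch)  # python launch.py --opt-sdp-attention --share
```

The same replacement applies wherever the notebook builds the webui command line.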
I tried adding --lowvram and managed to generate 1536x1536 on the free GPU without running out of memory. Hopefully after the xformers fix we can use it and save even more VRAM.
Also, I wanted to try renting a better GPU with more VRAM and generating the biggest image possible, and I have a question: is it possible to generate images bigger than 2048x2048?
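For intuition on why resolution hits VRAM so hard: naive (non-xformers) self-attention materializes a matrix that grows with the square of the latent token count. A back-of-the-envelope sketch, assuming the standard 8x VAE downscale and fp16 storage (actual usage depends heavily on the attention implementation):

```python
# Rough size of one naively materialized self-attention matrix.
# Assumptions: 8x VAE downscale, fp16 (2 bytes per element).
def attn_matrix_bytes(width, height, downscale=8, bytes_per_el=2):
    n = (width // downscale) * (height // downscale)  # latent tokens
    return n * n * bytes_per_el

for side in (1280, 1536, 2048):
    gib = attn_matrix_bytes(side, side) / 2**30
    print(f"{side}x{side}: ~{gib:.1f} GiB per naive attention matrix")
```

This quadratic growth is why memory-efficient attention (xformers) or offloading flags like --lowvram make the difference between 1280x1280 working and 1536x1536 failing.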
You can go bigger than 2048x2048, but the coherence might not be there. Use my runpod template for cheaper GPUs.
I want to generate maps, so I hope the image will just be extended.
I mean, how can I set the width and height beyond this limit here? Also, can I ask where I can find your template?
outpainting would be your best choice
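Outpainting sidesteps the size sliders by growing the image in overlapping passes rather than in one generation. A sketch of how such a plan could be laid out (a hypothetical helper, not A1111 code; the step and overlap numbers are illustrative):

```python
# Plan rightward outpainting passes to grow an image past the UI's
# single-generation limit. Each pass feeds an overlapping strip to
# img2img so the new content blends with the existing edge.
def outpaint_plan(start_w, target_w, step=256, overlap=64):
    """Return (strip_start_x, new_total_width) for each outpainting pass."""
    plan, w = [], start_w
    while w < target_w:
        grow = min(step, target_w - w)
        plan.append((w - overlap, w + grow))
        w += grow
    return plan

print(outpaint_plan(2048, 2560))  # [(1984, 2304), (2240, 2560)]
```

For a map, you would repeat this in each direction, which keeps local coherence even though no single generation ever exceeds the limit.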
@Super-zapper https://www.runpod.io/console/gpu-secure-cloud?template=runpod-stable-unified
You can also click on the Runpod thumbnail on the main page of this repo; the RTX 3090 is only $0.30/hour, better than the A100.
Thanks, but I was hoping for a less limited approach than outpainting.
Thank you
Hey, it looks like the new xformers is out. It may resolve the NaN errors and could be more memory efficient than --opt-sdp-attention. I'm out of my depth as far as trying it on mine, but I just wanted to share the news in case anyone here wanted to give it a shot.
I'll check it out. EDIT: still not fixed.
I am also getting this when using poor man's outpainting with SDXL checkpoints. Adding the argument --no-half makes it work for me, but the s/it gets much higher even just to expand 128 pixels on one side.
If I use the --disable-nan-check argument, I get black output only...
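The --no-half workaround fits the usual explanation of these errors: in half precision, intermediate values can exceed the fp16 range (max ~65504) and become inf/NaN, which the NaN check then reports (and --disable-nan-check merely hides, hence the black images). A small illustration of the numeric limit using Python's stdlib half-precision packing (not A1111 code; note that struct raises OverflowError where GPU fp16 math would silently produce inf):

```python
import struct

def to_fp16(x):
    # Round-trip a float through IEEE half precision.
    # Values past ~65504 simply don't fit in 16 bits.
    return struct.unpack("e", struct.pack("e", x))[0]

print(to_fp16(60000.0))      # representable in fp16
try:
    to_fp16(60000.0 * 2.0)   # past the fp16 maximum of 65504
except OverflowError:
    print("overflow: value does not fit in half precision")
```

Running in fp32 (--no-half) avoids the overflow entirely, at the cost of roughly double the memory and the slower s/it observed above.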
Replace --xformers with --opt-sdp-attention.
Here's the full error message:
This all started a few days ago. There are no issues when I try to generate smaller images: 1280x1280 is fine, but if I go to 1200x1800 or 1536x1536, that error is thrown. Basically, I can't hires fix moderately sized images anymore. I'm using the exact same model, LoRAs, settings, prompts, and everything else as I always have.
I've tried:
Maybe something was changed on colab's side? Any help would be appreciated.