Closed tranthai2k2 closed 1 year ago
The issue seems to be a temporary regression within gradio or the webui itself. Could you try changing the last line of run_webui to this?

!COMMANDLINE_ARGS="{other_args} {vae_args} {vram} --gradio-queue --gradio-auth {gradio_username}:{gradio_password}" REQS_FILE="requirements.txt" python launch.py
This works for me
def run_webui():
    #@markdown Choose the vae you want
    vae = "Anime (Anything 4)" #@param ["Anime (Anything 3)", "Anime (Anything 4)", "Anime (Waifu Diffusion 1.4)", "Stable Diffusion", "None"]
    vae_args = ""  # stays empty when "None" is selected, so the launch line never references an undefined variable
    if vae == "Anime (Anything 3)":
        !wget -c https://huggingface.co/Linaqruf/anything-v3.0/resolve/main/Anything-V3.0.vae.pt -O {root_dir}/stable-diffusion-webui/models/Stable-diffusion/Anything-V3.0.vae.pt
        vae_args = "--vae-path " + root_dir + "/stable-diffusion-webui/models/Stable-diffusion/Anything-V3.0.vae.pt"
    elif vae == "Anime (Anything 4)":
        !wget -c https://huggingface.co/andite/anything-v4.0/resolve/main/anything-v4.0.vae.pt -O {root_dir}/stable-diffusion-webui/models/Stable-diffusion/anything-v4.0.vae.pt
        vae_args = "--vae-path " + root_dir + "/stable-diffusion-webui/models/Stable-diffusion/anything-v4.0.vae.pt"
    elif vae == "Anime (Waifu Diffusion 1.4)":
        !wget -c https://huggingface.co/hakurei/waifu-diffusion-v1-4/resolve/main/vae/kl-f8-anime.ckpt -O {root_dir}/stable-diffusion-webui/models/Stable-diffusion/kl-f8-anime.vae.pt
        vae_args = "--vae-path " + root_dir + "/stable-diffusion-webui/models/Stable-diffusion/kl-f8-anime.vae.pt"
    elif vae == "Stable Diffusion":
        !wget -c https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.ckpt -O {root_dir}/stable-diffusion-webui/models/Stable-diffusion/sd-v1-5.vae.pt
        vae_args = "--vae-path " + root_dir + "/stable-diffusion-webui/models/Stable-diffusion/sd-v1-5.vae.pt"
    %cd {root_dir}/stable-diffusion-webui/
    !COMMANDLINE_ARGS="{other_args} {vae_args} {vram} --gradio-queue --gradio-auth {gradio_username}:{gradio_password}" REQS_FILE="requirements.txt" python launch.py
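As a purely illustrative sketch (this helper is not part of the notebook; the parameter names mirror the notebook's own globals), the COMMANDLINE_ARGS string that the launch line expands could be assembled like this, skipping empty pieces such as vae_args when "None" is selected:

```python
# Hypothetical helper, not the notebook's code: build the COMMANDLINE_ARGS
# string used by the launch line, dropping any empty components.
def build_commandline_args(other_args, vae_args, vram, gradio_username, gradio_password):
    parts = [other_args, vae_args, vram, "--gradio-queue",
             "--gradio-auth " + gradio_username + ":" + gradio_password]
    return " ".join(p for p in parts if p)

print(build_commandline_args("--xformers", "", "--medvram", "user", "pass"))
# → --xformers --medvram --gradio-queue --gradio-auth user:pass
```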
I think the problem is with !git clone https://github.com/acheong08/stable-diffusion-webui. acheong08's fork is full-featured (it has both LoRA and autotag) but unstable, while camenduru's is stable but can't add autotag or LoRA. Sorry, the code you edited and added is great and stable; if it works, I'll recommend your notebook to my friends.
Mine clones the latest commit from the A1111 webui repo 🤔
From what I can see, acheong08 has not pushed any commits of his own recently; he's been merging upstream changes.
I have removed --gradio-queue since somebody else also reported an issue right after I pushed the fix.
https://colab.research.google.com/drive/1iwLtfEeoUTTVFZ08iVkvJ5jBhwKcspty?usp=sharing#scrollTo=fAsaOpxoT-PC This is the notebook I tweaked to my liking; it's quite stable. Still, if possible, I hope your notebook gets updated to stabilize image export so it doesn't finish without producing an image. Thank you for your notebook; I hope there will be more.
Even after an image is generated, it still can't be retrieved, though previously generated images are there.
I am not experiencing such an issue. Hmm
Can you see the images in the gallery tab?
Here are my errors:
I could reproduce the issue using your notebook, and it was solved by adding --gradio-queue to the launch args.
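As a hedged sketch (the flag name comes from this thread; the helper itself is hypothetical, not code from either notebook), appending the flag to a launch-args string only when it is missing could look like:

```python
# Hypothetical helper: ensure --gradio-queue is present in a launch-args string.
def ensure_gradio_queue(args):
    if "--gradio-queue" in args.split():
        return args
    return (args + " --gradio-queue").strip()

print(ensure_gradio_queue("--xformers --medvram"))
# → --xformers --medvram --gradio-queue
```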
I've pushed a new commit using the gradio queue, with a tag-completion extension checkbox and LoRA as well. Test it out.
Error completing request
Arguments: ('task(mwkjn3p8ul6fi5w)', 'masterpiece, best quality, twintails, wide sleeves, hands on hips, hand on hip, breasts, 1girl, dress, solo, clothing cutout, thighhighs, cleavage, chinese clothes, rating:safe, pelvic curtain, mole on breast, large breasts, mole on thigh , black hair, china dress, blush, smile, cleavage cutout, short hair, bare shoulders, blue sky, looking at viewer, covered navel, no panties, focused, upright, thigh-high, opposite, volumetric light, good light,, masterpiece, best quality, very detailed, wallpaper 8k cg unity extremely detailed, illustrations,((beautifully detailed) face) ), best quality, (((super detailed ))) , high quality, high resolution illustrations, high resolution , side light, ((best illustration)), high resolution, illustration, absurd, super detailed, intricate detail, perfect , highly detailed eyes ,yellow eyes, perfect light, (CG:1.2 color is extremely detailed),((bangs covering one eye))', 'nsfw, loli, small breasts, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, blurry, artist name, worst quality, low quality, (worst quality, low quality, extra digits, loli, loli face:1.3)', [], 32, 16, False, False, 2, 2, 7, 324660525.0, -1.0, 0, 0, 0, False, 648, 584, False, 0.7, 2, 'Latent', 0, 0, 0, 0, False, False, False, False, '', 1, '', 0, '', True, False, False) {}
Traceback (most recent call last):
File "/content/stable-diffusion-webui/modules/call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "/content/stable-diffusion-webui/modules/call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "/content/stable-diffusion-webui/modules/txt2img.py", line 52, in txt2img
processed = process_images(p)
File "/content/stable-diffusion-webui/modules/processing.py", line 476, in process_images
res = process_images_inner(p)
File "/content/stable-diffusion-webui/modules/processing.py", line 614, in process_images_inner
samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
File "/content/stable-diffusion-webui/modules/processing.py", line 809, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "/content/stable-diffusion-webui/modules/sd_samplers.py", line 544, in sample
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "/content/stable-diffusion-webui/modules/sd_samplers.py", line 447, in launch_sampling
return func()
File "/content/stable-diffusion-webui/modules/sd_samplers.py", line 544, in
The UI at least starts and doesn't throw an error, because --gradio-queue is present. So this is an issue with xformers. I noticed the initial version of the notebook you sent used xformers 0.0.15, whereas mine was recently updated to 0.0.16. It might be worth trying the previous version.
i.e. replace

!pip install https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.16/xformers-0.0.16+814314d.d20230118-cp38-cp38-linux_x86_64.whl

with

!pip install https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15+e163309.d20230103-cp38-cp38-linux_x86_64.whl
Or you could just disable xformers since I cannot guarantee it will work.
I don't know why, but after I enable "run in drive" it works very smoothly and produces a very good image; before I enabled it, no image was shown. If possible, could you add an option to save only the images? "Run in drive" saves all the files to Drive, so it's a bit heavy, and the problem stays the same unless "run in drive" is on.
The issue is still out of my control. Though I will add an option for just saving images to gdrive nonetheless.
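One possible shape for that option (purely illustrative; the paths and the implementation are my assumptions, not the notebook's actual code): mirror only image files from the webui outputs folder into a mounted Google Drive directory, instead of running everything from Drive.

```python
# Hypothetical sketch: copy only image files (not models or configs) from the
# webui outputs directory into a mounted Google Drive folder, preserving the
# subdirectory layout.
import shutil
from pathlib import Path

def copy_images_only(outputs_dir, drive_dir, exts=(".png", ".jpg", ".jpeg")):
    src, dst = Path(outputs_dir), Path(drive_dir)
    copied = []
    for path in src.rglob("*"):
        if path.suffix.lower() in exts:
            target = dst / path.relative_to(src)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(path, target)
            copied.append(target)
    return copied
```

In a Colab cell this might be called as, say, copy_images_only("/content/stable-diffusion-webui/outputs", "/content/drive/MyDrive/sd-images") after mounting Drive; both paths are illustrative.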
Sorry if this is the wrong place. I make pictures with LoRA and it works great; your notebook is great. However, I don't know why no image is shown even though Colab finished creating it; on the web page there is no image. Hope you can help me. https://colab.research.google.com/drive/1iwLtfEeoUTTVFZ08iVkvJ5jBhwKcspty?usp=sharing I just tweaked it to make it easier for me, but the picture doesn't show up.