AUTOMATIC1111 / stable-diffusion-webui

Stable Diffusion web UI
GNU Affero General Public License v3.0

[Bug]: Generation process freezes (or misbehaves) when using a LoRA on an SDXL model / ValueError: gradio.queueing.Event is not in list. #14412

Closed · Aflexg closed this issue 1 month ago

Aflexg commented 8 months ago

What happened?

The generation process freezes at 95% for 1-10 minutes, or even forever, when I try to use a LoRA on an SDXL model.

Example 1: txt2img; sdxl_offset_example_lora; sdxl_vae.safetensors; Crystal Clear checkpoint. (In this case generation froze forever and I had to reboot my PC.) (screenshot: example1)

(Sorry for the poor quality of some images; sometimes my PC was completely frozen and I could barely take screenshots.)

Example 2: img2img; EdobHorrorLandscape LoRA; sdxl_vae.safetensors; Crystal Clear checkpoint. (In this case generation froze for 1 minute.) (screenshot: example2)

Example 3: Same as example 1 (except the VAE), but with a different result. After 1 minute of freezing I got a blank square instead of the generated image, and I couldn't use the Generate button anymore until I rebooted. (This time I didn't get any errors.)


Example 4: Same as example 3, but with a different result. After 1 minute of freezing I got a blank square instead of the generated image, and I couldn't use the Generate button anymore until I rebooted; this time, however, I also got an error: (screenshot: error.png)

Steps to reproduce the problem

1. Download the latest version of AUTOMATIC1111.
2. Download this checkpoint: https://civitai.com/models/122822/crystal-clear-xl
3. Download sd_xl_base_1.0.safetensors.
4. Download https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/tree/main

   Put the models in the \webui\models\Stable-diffusion folder.

5. Download https://huggingface.co/stabilityai/sdxl-vae/blob/main/sdxl_vae.safetensors

   Rename it to sd_xl_base_1.0_vae.safetensors and put it in the \webui\models\VAE folder.

6. Download https://civitai.com/models/137511/sdxl-offset-example-lora and put it in the \webui\models\Lora folder.
7. Select the crystal-clear-xl checkpoint.
8. Set the VAE to None.
9. Type the prompt: "dog,"
10. Generate with parameters: 1024x1024, DPM++ 2M Karras, 30 steps / 512x512, DPM++ 2M Karras, 20 steps. (This generation should be the first after starting the webui.)
11. Select the VAE sd_xl_base_1.0_vae.safetensors.
12. Type the prompt: "dog,"
13. Generate with parameters: 1024x1024, DPM++ 2M Karras, 30 steps / 512x512, DPM++ 2M Karras, 20 steps.

An important additional detail: it usually freezes, but sometimes it works fine, and unfortunately I couldn't find a strict pattern to this phenomenon. I did notice, however, that generating at 30+ steps and 1024x1024 or larger, combined with attempting a LoRA generation directly after startup (without any generations before it), causes the problem most often.
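Before generating, it may help to confirm that the files from steps 1-6 landed in the right folders. A minimal sketch (an editor's illustration, not part of the original report; the webui root and the exact filenames are placeholders that depend on how the files were downloaded):

```python
from pathlib import Path

# Placeholder: adjust to your actual webui installation root.
WEBUI = Path(r"F:\CivitaAi\webui")

# Files and target folders from steps 1-6 above; exact names may differ.
expected = {
    WEBUI / "models" / "Stable-diffusion" / "crystalClearXL_ccxl.safetensors": "checkpoint (step 2)",
    WEBUI / "models" / "Stable-diffusion" / "sd_xl_base_1.0.safetensors": "SDXL base (step 3)",
    WEBUI / "models" / "VAE" / "sd_xl_base_1.0_vae.safetensors": "renamed SDXL VAE (step 5)",
    WEBUI / "models" / "Lora" / "sd_xl_offset_example-lora_1.0.safetensors": "offset example LoRA (step 6)",
}

for path, label in expected.items():
    status = "OK" if path.is_file() else "MISSING"
    print(f"{status:7s} {label}: {path}")
```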

What should have happened?

The webui should give me a picture of a dog within 30-60 seconds (because without the LoRA, Stable Diffusion does exactly that), without any freezes or errors.

What browsers do you use to access the UI?

Google Chrome

Sysinfo

sysinfo-2023-12-23-18-31.json

Console logs

Log from example 3:

```
venv "F:\CivitaAi\webui\venv\Scripts\Python.exe"
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr  5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Version: v1.7.0
Commit hash: cf2772fab0af5573da775e7437e6acdca424f26e
Launching Web UI with arguments: --xformers --medvram
[-] ADetailer initialized. version: 23.11.1, num models: 9
[AddNet] Updating model hashes...
100%|████████████████████████████████████████████████████████████████████████████████| 12/12 [00:00<00:00, 2401.20it/s]
[AddNet] Updating model hashes...
100%|████████████████████████████████████████████████████████████████████████████████| 12/12 [00:00<00:00, 6004.01it/s]
Loading weights [0b76532e03] from F:\CivitaAi\webui\models\Stable-diffusion\crystalClearXL_ccxl (1).safetensors
F:\CivitaAi\webui\extensions\stable-diffusion-webui-two-shot\scripts\two_shot.py:397: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
  canvas_image = gr.Image(source='upload', mirror_webcam=False, type='numpy', tool='color-sketch',
F:\CivitaAi\webui\extensions\stable-diffusion-webui-two-shot\scripts\two_shot.py:471: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
  visual_regions = gr.Gallery(label="Regions").style(grid=(4, 4, 4, 8), height="auto")
F:\CivitaAi\webui\extensions\stable-diffusion-webui-two-shot\scripts\two_shot.py:471: GradioDeprecationWarning: The 'grid' parameter will be deprecated. Please use 'columns' in the constructor instead.
  visual_regions = gr.Gallery(label="Regions").style(grid=(4, 4, 4, 8), height="auto")
F:\CivitaAi\webui\extensions\PBRemTools\scripts\main.py:47: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
  gallery = gr.Gallery(label="outputs", show_label=True, elem_id="gallery").style(grid=2, object_fit="contain")
F:\CivitaAi\webui\extensions\PBRemTools\scripts\main.py:47: GradioDeprecationWarning: The 'grid' parameter will be deprecated. Please use 'columns' in the constructor instead.
  gallery = gr.Gallery(label="outputs", show_label=True, elem_id="gallery").style(grid=2, object_fit="contain")
F:\CivitaAi\webui\extensions\sd-webui-additional-networks\scripts\metadata_editor.py:399: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
  with gr.Row().style(equal_height=False):
F:\CivitaAi\webui\extensions\sd-webui-additional-networks\scripts\metadata_editor.py:521: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
  cover_image = gr.Image(
Creating model from config: F:\CivitaAi\webui\repositories\generative-models\configs\inference\sd_xl_base.yaml
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 52.3s (prepare environment: 11.0s, import torch: 8.3s, import gradio: 4.1s, setup paths: 5.1s, initialize shared: 0.5s, other imports: 4.1s, setup codeformer: 0.4s, load scripts: 15.4s, create ui: 2.3s, gradio launch: 0.7s).
Loading VAE weights specified in settings: F:\CivitaAi\webui\models\VAE\sd_xl_base_1.0_vae.safetensors
Applying attention optimization: xformers... done.
Model loaded in 33.7s (load weights from disk: 2.6s, create model: 0.7s, apply weights to model: 22.1s, apply half(): 2.4s, load VAE: 2.2s, move model to device: 0.2s, hijack: 0.6s, load textual inversion embeddings: 0.8s, calculate empty prompt: 2.1s).
Restoring base VAE
Applying attention optimization: xformers... done.
VAE weights loaded.
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:17<00:00,  1.14it/s]
==========================================================================================0/20 [00:03<00:00,  6.41it/s]
A tensor with all NaNs was produced in VAE.
Web UI will now convert VAE into 32-bit float and retry.
To disable this behavior, disable the 'Automatically revert VAE to 32-bit floats' setting.
To always start with 32-bit VAE, use --no-half-vae commandline flag.
==========================================================================================
Total progress: 100%|██████████████████████████████████████████████████████████████████| 20/20 [02:06<00:00,  6.33s/it]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 20/20 [02:06<00:00,  6.41it/s]
```
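The "A tensor with all NaNs was produced in VAE" block in this log is the half-precision SDXL VAE overflowing during decode. The fallback the webui describes is roughly the pattern below (a minimal sketch, not the webui's actual code; `vae` and `latents` stand in for the loaded model and the sampler's output). As the log itself suggests, launching with `--no-half-vae` skips the fp16 attempt entirely.

```python
import torch

def decode_with_fp32_fallback(vae, latents):
    """Decode latents with a half-precision VAE, retrying in fp32 on NaNs."""
    image = vae.half().decode(latents.half())
    if torch.isnan(image).any():
        # NaNs here mean the fp16 VAE overflowed; promote everything
        # to 32-bit floats and decode once more, as the log describes.
        image = vae.float().decode(latents.float())
    return image
```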

Additional information

I don't have any problems with SDXL models if I don't use a LoRA, and I don't have any problems with SD 1.5 models even when I do use a LoRA.

Jareth329 commented 7 months ago

I have been experiencing the same issue. Some notes:

- Using a LoRA also causes a separate(?) issue where my PC hits ~15.5 GB of RAM usage with nothing else open; specifically, usage spikes by ~5 GB during/after the final generation step. This freezes my computer if I have anything else open (I'm on an HDD).
- Sometimes the webui drops its RAM usage to ~6 GB (total in use on my PC) before starting the generation process, so when it spikes by ~5 GB at the end it is still only at 10-11 GB and causes no issues. It does not do this consistently, though, and seems to fail more often when using LoRAs (it might even always fail the first time a LoRA is used).
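To see exactly when a spike like this happens, a small monitor script can log system RAM usage while a generation runs (an illustrative sketch added in editing, not from the original comment; requires `pip install psutil`):

```python
import time

import psutil

# Print total system RAM usage once per second; run this alongside
# the webui while generating to see when the spike occurs.
while True:
    mem = psutil.virtual_memory()
    print(f"used: {mem.used / 2**30:5.1f} GiB ({mem.percent:4.1f}%)")
    time.sleep(1)
```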

ShenkiIndigo commented 6 months ago

I faced the same problem. Quickly skipping the last step helped me, but then I switched to a different VAE and the bug disappeared: https://civitai.com/models/140686/fix-fp16-errors-sdxl-lower-memory-use-sdxl-vae-fp16-fix-by-madebyollin
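That VAE is madebyollin's sdxl-vae-fp16-fix, finetuned so its internal activations stay within fp16 range. In the webui you can simply drop the downloaded .safetensors into \webui\models\VAE and select it; for anyone reproducing outside the webui, a minimal diffusers sketch (an editor's illustration; assumes torch and diffusers are installed and a CUDA GPU is available):

```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# VAE finetuned to avoid the fp16 NaN overflow of the stock SDXL VAE.
vae = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16
)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")

# Same prompt and step count as the repro steps above.
image = pipe("dog", num_inference_steps=30).images[0]
image.save("dog.png")
```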