anapnoe / stable-diffusion-webui-ux

Stable Diffusion web UI UX
GNU Affero General Public License v3.0

[Bug]: Prompt and Negative Prompt field sometimes locked #109

Closed. Linaqruf closed this issue 1 year ago.

Linaqruf commented 1 year ago

Is there an existing issue for this?

What happened?

Hi, thanks for the good work.

I've run into a problem recently: the prompt and negative prompt fields sometimes become locked, and I can't type in or edit them.

[screenshot of the locked prompt and negative prompt fields]

Steps to reproduce the problem

  1. Run this notebook: https://colab.research.google.com/github/Linaqruf/sd-notebook-collection/blob/main/cagliostro-colab-ui.ipynb
  2. Use the default values.
  3. Install Illuminati v1.1.
  4. Access the UI through a cloudflared tunnel (that is what I was using when I hit this issue).
  5. Generate.
  6. Sometimes it won't let you type in the prompt/negative prompt fields.

What should have happened?

The prompt and negative prompt fields should stay editable.

Commit where the problem happens

079a5e4d8c8a3a4de43f75a7f64a8f363d02a8c1

What platforms do you use to access the UI ?

Linux

What browsers do you use to access the UI ?

Google Chrome

Command Line Arguments

--enable-insecure-extension-access --disable-safe-unpickle --xformers --multiple --share --gradio-auth=cagliostro:9CTsjq --no-half-vae --lowram --no-hashing --disable-console-progressbars --opt-sub-quad-attention --opt-channelslast --theme=dark --ckpt-dir=/content/cagliostro-colab-ui/models/Stable-diffusion --vae-dir=/content/cagliostro-colab-ui/models/VAE --hypernetwork-dir=/content/cagliostro-colab-ui/models/hypernetworks --embeddings-dir=/content/cagliostro-colab-ui/embeddings --lora-dir=/content/cagliostro-colab-ui/models/Lora --no-download-sd-model --gradio-queue

List of extensions

"https://github.com/hnmr293/sd-webui-cutoff",
"https://github.com/KohakuBlueleaf/a1111-sd-webui-locon",
"https://github.com/DominikDoom/a1111-sd-webui-tagcomplete.git",
"https://github.com/etherealxx/batchlinks-webui",
"https://github.com/mcmonkeyprojects/sd-dynamic-thresholding",
"https://github.com/kohya-ss/sd-webui-additional-networks.git",
"https://github.com/thomasasfk/sd-webui-aspect-ratio-helper.git",
"https://github.com/Mikubill/sd-webui-controlnet",
"https://github.com/camenduru/sd-webui-tunnels",
"https://github.com/bbc-mc/sdweb-merge-block-weighted-gui.git",
"https://github.com/bbc-mc/sdweb-xyplus",
f"https://github.com/opparco/stable-diffusion-webui-composable-lora.git",
f"https://github.com/AlUlkesh/stable-diffusion-webui-images-browser.git",
f"https://github.com/arenatemp/stable-diffusion-webui-model-toolkit",
f"https://github.com/ashen-sensored/stable-diffusion-webui-two-shot",
"https://github.com/Coyote-A/ultimate-upscale-for-automatic1111.git",
f"https://github.com/klimaleksus/stable-diffusion-webui-fix-image-paste",
"https://github.com/derrian-distro/sd_webui_stealth_pnginfo",
"https://github.com/hnmr293/sd-webui-llul",
"https://github.com/KohakuBlueleaf/a1111-sd-webui-lycoris",
"https://github.com/hako-mikan/sd-webui-regional-prompter",
"https://github.com/camenduru/sd-civitai-browser",

Console logs

I don't think the console logs will help with a UI problem, but here they are:

Python 3.10.11 (main, Apr  5 2023, 14:15:10) [GCC 9.4.0]
Commit hash: 079a5e4d8c8a3a4de43f75a7f64a8f363d02a8c1
Installing requirements for Web UI

Launching Web UI with arguments: --enable-insecure-extension-access --disable-safe-unpickle --xformers --multiple --share --gradio-auth=cagliostro:9CTsjq --no-half-vae --lowram --no-hashing --disable-console-progressbars --opt-sub-quad-attention --opt-channelslast --theme=dark --ckpt-dir=/content/cagliostro-colab-ui/models/Stable-diffusion --vae-dir=/content/cagliostro-colab-ui/models/VAE --hypernetwork-dir=/content/cagliostro-colab-ui/models/hypernetworks --embeddings-dir=/content/cagliostro-colab-ui/embeddings --lora-dir=/content/cagliostro-colab-ui/models/Lora --no-download-sd-model --gradio-queue
Additional Network extension not installed, Only hijack built-in lora
LoCon Extension hijack built-in lora successfully
[AddNet] Updating model hashes...
0it [00:00, ?it/s]
[AddNet] Updating model hashes...
0it [00:00, ?it/s]
ControlNet v1.1.116
ControlNet v1.1.116
all detected, remote.moe trying to connect...
Warning: Permanently added 'localhost.run,54.161.197.247' (RSA) to the list of known hosts.
Warning: Permanently added 'remote.moe,159.69.126.209' (ECDSA) to the list of known hosts.
all detected, cloudflared trying to connect...
Download cloudflared...: 100% 34.9M/34.9M [00:00<00:00, 257MB/s]
Checkpoint anylora.safetensors not found; loading fallback illuminati_diffusion_v1_1.safetensors
Loading weights [None] from /content/cagliostro-colab-ui/models/Stable-diffusion/illuminati_diffusion_v1_1.safetensors
Creating model from config: /content/cagliostro-colab-ui/repositories/stable-diffusion-stability-ai/configs/stable-diffusion/v2-inference-v.yaml
LatentDiffusion: Running in v-prediction mode
DiffusionWrapper has 865.91 M params.
Loading VAE weights specified in settings: /content/cagliostro-colab-ui/models/VAE/anime.vae.pt
Applying xformers cross attention optimization.
Textual inversion embeddings loaded(3): wdbadprompt, re-badprompt, rev2-badprompt
Textual inversion embeddings skipped(8): bad-hands-5, ng_deepnegative_v1_75t, bad_prompt, EasyNegative, bad_prompt_version2, bad-artist-anime, bad-artist, bad-image-v2-39000
Model loaded in 28.1s (load weights from disk: 11.7s, find config: 5.0s, create model: 0.4s, apply weights to model: 3.1s, apply channels_last: 1.2s, apply half(): 1.6s, load VAE: 4.1s, move model to device: 0.8s, load textual inversion embeddings: 0.1s).
Public WebUI Colab URL: http://sb2k27x2r6r4ps46i2qh4do3md7boqsc5fgljjs35hsl6zdrkpua.remote.moe 
Public WebUI Colab URL: https://a458b0b3-9dfd-4a5e.gradio.live 
Public WebUI Colab URL: https://0a5559d21fcc44.lhr.life
Please do not use this link we are getting ERROR: Exception in ASGI application:  https://4e1bfff791c9d3432d.gradio.live
Public WebUI Colab URL: https://queen-siemens-booty-bus.trycloudflare.com
Startup time: 61.7s (import torch: 10.6s, import gradio: 1.3s, import ldm: 2.3s, other imports: 1.7s, setup codeformer: 0.2s, load scripts: 11.1s, load SD checkpoint: 28.1s, create ui: 1.6s, gradio launch: 4.9s).
Textual inversion embeddings loaded(6): wdbadprompt, re-badprompt, nartfixer, nfixer, rev2-badprompt, nrealfixer
Textual inversion embeddings skipped(8): bad-hands-5, ng_deepnegative_v1_75t, bad_prompt, EasyNegative, bad_prompt_version2, bad-artist-anime, bad-artist, bad-image-v2-39000
100% 20/20 [00:07<00:00,  2.51it/s]
100% 20/20 [00:04<00:00,  4.43it/s]
Loading model: canny-sd21-safe [4ac9f628]
Loaded state_dict from [/content/cagliostro-colab-ui/models/ControlNet/canny-sd21-safe.safetensors]
Loading config: /content/cagliostro-colab-ui/models/ControlNet/canny-sd21-safe.yaml
ControlNet model canny-sd21-safe [4ac9f628] loaded.
Loading preprocessor: canny
preprocessor resolution = 512
100% 20/20 [00:06<00:00,  2.88it/s]
Loading model from cache: canny-sd21-safe [4ac9f628]
Loading preprocessor: canny
preprocessor resolution = 1024
100% 20/20 [00:06<00:00,  3.09it/s]
clear_alpha called
clear_alpha called
clear_alpha called
100% 12/12 [00:02<00:00,  4.32it/s]
100% 15/15 [00:03<00:00,  4.41it/s]
100% 20/20 [00:04<00:00,  4.30it/s]
100% 20/20 [00:04<00:00,  4.32it/s]
100% 20/20 [00:04<00:00,  4.46it/s]
100% 20/20 [00:04<00:00,  4.48it/s]
100% 20/20 [00:04<00:00,  4.32it/s]
Loading model from cache: canny-sd21-safe [4ac9f628]
Loading preprocessor: canny
preprocessor resolution = 512
100% 20/20 [00:06<00:00,  3.11it/s]
Loading model from cache: canny-sd21-safe [4ac9f628]
Loading preprocessor: canny
preprocessor resolution = 512
100% 20/20 [00:06<00:00,  3.06it/s]
Loading model from cache: canny-sd21-safe [4ac9f628]
Loading preprocessor: canny
preprocessor resolution = 512
100% 20/20 [00:06<00:00,  3.12it/s]
Loading model from cache: canny-sd21-safe [4ac9f628]
Loading preprocessor: canny
preprocessor resolution = 512
100% 20/20 [00:06<00:00,  3.15it/s]
Loading model from cache: canny-sd21-safe [4ac9f628]
Loading preprocessor: canny
preprocessor resolution = 1024
100% 20/20 [00:06<00:00,  3.06it/s]
Loading model from cache: canny-sd21-safe [4ac9f628]
Loading preprocessor: canny
preprocessor resolution = 1024
100% 20/20 [00:06<00:00,  3.13it/s]
Loading model from cache: canny-sd21-safe [4ac9f628]
Loading preprocessor: canny
preprocessor resolution = 1024
100% 20/20 [00:06<00:00,  3.27it/s]

Additional information

I also have a problem when generating images with hires. fix: it sometimes gets stuck at 98% and never saves or shows the result. As far as I know, this doesn't happen in AUTOMATIC1111 or Vlad's fork.

anapnoe commented 1 year ago

You are welcome to join our dev Discord server, we would be more than happy to have you onboard: https://discord.gg/R46Xmx8B

Does this blue dot remain visible on the other input fields on update, or only on the prompt fields? Does this happen on initialization only? If you click on the magic wand or clear button, does the issue persist, and does it freeze the whole application? I want to be sure that only the view (the response) is having the problem updating the component.

mart-hill commented 1 year ago

For me it happened right after I used the "wand" button, and only the prompt fields became locked with this "LED" light. I was able to generate an image with the restored session prompt, but after one picture I changed the model via the "extra networks" overlay, and then the UI locked itself up after I pressed Generate (the checkpoint did load completely). Apparently it's the "race condition" bug described here. I'll update all the extensions and recheck whether the prompt-field lock-up remains after I press the "wand" button.

I'm on Windows 10 22H2, RTX 3090.

Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec  6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Commit hash: 079a5e4d8c8a3a4de43f75a7f64a8f363d02a8c1

Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 192.2s (import torch: 60.3s, import gradio: 13.9s, import ldm: 6.5s, other imports: 12.6s, list SD models: 0.7s, setup codeformer: 1.0s, list builtin upscalers: 0.4s, load scripts: 17.7s, load SD checkpoint: 18.0s, create ui: 60.0s, gradio launch: 1.0s, scripts app_started_callback: 0.1s).
Loading model: control_v11u_sd15_tile [1f041471]
Loaded state_dict from [X:\AI\stable-diffusion-webui-ux\models\ControlNet\control_v11u_sd15_tile.pth]
Loading config: X:\AI\stable-diffusion-webui-ux\models\ControlNet\control_v11u_sd15_tile.yaml
ControlNet model control_v11u_sd15_tile [1f041471] loaded.
Loading preprocessor: tile_resample
Pixel Perfect Mode Enabled.
resize_mode = ResizeMode.OUTER_FIT
raw_H = 1800
raw_W = 1800
target_H = 1400
target_W = 928
estimation = 928.0
preprocessor resolution = 896
100%|██████████████████████████████████████████████████████████████████████████████| 40/40 [00:20<00:00,  1.94it/s]
Loading CLiP model ViT-L/14
100%|██████████████████████████████████████████████████████████████████████████████| 40/40 [00:28<00:00,  1.39it/s]
Loading weights [1bb0e48cf8] from X:\AI\stable-diffusion-webui-ux\models\Stable-diffusion\aya_18030-fixed.safetensors
Creating model from config: X:\AI\stable-diffusion-webui-ux\models\Stable-diffusion\aya_18030-fixed.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Loading VAE weights specified in settings: X:\AI\stable-diffusion-webui-ux\models\VAE\vae-ft-mse-840000-ema-pruned.safetensors
Applying xformers cross attention optimization.
Model loaded in 9.8s (create model: 0.6s, apply weights to model: 5.5s, apply half(): 0.7s, load VAE: 0.2s, move model to device: 1.2s, load textual inversion embeddings: 1.4s).
**[I pressed GENERATE here]**

Edit: Apparently, updating the extensions helped resolve the issue with locked prompt fields for me. I'll test it more.

FutonGama commented 1 year ago

I fixed it. Overwrite the file in the stable-diffusion-webui-ux root folder (make a backup first, just in case): Style CSS Prompt Block HotFix.zip

I changed a few things in style.css.

This:

[id$="2img_token_counter"].block, [id$="2img_negative_token_counter"].block { position:absolute !important; text-align:right; z-index:99; }

Changed to this:

.block.token-counter{ position:absolute !important; text-align:right; z-index:99; }

.block.token-counter div{ display: inline; }

.block.token-counter span{ padding: 0.1em 0.75em; }

This should help until it gets fully fixed by anapnoe. I don't know whether it can cause other problems, but it's working well here, and the UI keeps the same look with the fix.
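Put together, the replacement block in style.css would look roughly like this (just the three rules above reformatted with comments; a sketch, not tested beyond my own setup, and the selectors may need adjusting in future versions):

/* Float the token counter over the corner of the prompt box and keep it on top. */
.block.token-counter {
    position: absolute !important;
    text-align: right;
    z-index: 99;
}

/* Render the counter's inner wrapper inline instead of as a full-width block,
   so it no longer spans the whole prompt field. */
.block.token-counter div {
    display: inline;
}

/* A little padding so the count doesn't sit flush against the prompt text. */
.block.token-counter span {
    padding: 0.1em 0.75em;
}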

anapnoe commented 1 year ago

Close the issue; if this is still happening, I will reopen it and look into it more.

FutonGama commented 1 year ago

It's happening again in this version, but only while an image is generating. It's very annoying. The new version is also giving me problems with embeddings: my embeddings don't work. I had to create a subfolder under Lora and put the embeddings there to make them work. It's giving some dictionary error, something like that.

FutonGama commented 1 year ago

My fix worked again on this version, thank god. If you want to include it, anapnoe, it's all yours.