anapnoe / stable-diffusion-webui-ux

Stable Diffusion web UI UX
GNU Affero General Public License v3.0
964 stars 60 forks

[Bug]: Token Counter covers entire prompt text box during calculation #180

Closed: Nyksia closed this issue 8 months ago

Nyksia commented 1 year ago

Is there an existing issue for this?

What happened?

When generating an image, if you edit the prompt and it attempts to recalculate the tokens, the token counter div element overlaps the entire text area, making it impossible to click into it. After some DevTools exploration, I found that the token counter div specifically was the cause: hiding it makes the text box clickable again.

EDIT: The settings option to disable the token counters also doesn't actually work.

Steps to reproduce the problem

  1. Generate an image
  2. While the image is generating, edit the prompt
  3. Attempt to click into the prompt text box and fail to do so

What should have happened?

You should be able to click into the prompt text box and edit it even while generation is in progress.

Version or Commit where the problem happens

https://github.com/anapnoe/stable-diffusion-webui-ux/commit/3843d60ae7a692932db9be5ca33625382be32ee9

What Python version are you running on?

Python 3.10.x

What platforms do you use to access the UI?

Windows

What device are you running WebUI on?

Nvidia GPUs (RTX 20 above)

Cross attention optimization

xformers

What browsers do you use to access the UI?

Microsoft Edge

Command Line Arguments

--api --xformers --upcast-sampling --listen --enable-insecure-extension-access

List of extensions

Extension | URL | Branch | Version | Date | Update
-- | -- | -- | -- | -- | --
a1111-sd-webui-lycoris | https://github.com/KohakuBlueleaf/a1111-sd-webui-lycoris.git | main | 025dea96 | Wed Jun 14 13:56:41 2023 | unknown
a1111-sd-webui-tagcomplete | https://github.com/DominikDoom/a1111-sd-webui-tagcomplete.git | main | 7f188563 | Sun Jun 25 10:34:57 2023 | unknown
adetailer | https://github.com/Bing-su/adetailer.git | main | 8bf5b628 | Fri Jun 30 03:17:30 2023 | unknown
auto-sd-paint-ext | https://github.com/Interpause/auto-sd-paint-ext | main | 00714355 | Wed May 3 12:34:30 2023 | unknown
clip-interrogator-ext | https://github.com/pharmapsychotic/clip-interrogator-ext.git | main | c0bf9005 | Tue Jun 27 19:06:31 2023 | unknown
multidiffusion-upscaler-for-automatic1111 | https://github.com/pkuliyi2015/multidiffusion-upscaler-for-automatic1111.git | main | de488810 | Sun Jun 18 21:39:44 2023 | unknown
sd-dynamic-thresholding | https://github.com/mcmonkeyprojects/sd-dynamic-thresholding.git | master | c8197531 | Sat Jul 1 04:22:51 2023 | unknown
sd-webui-ar | https://github.com/alemelis/sd-webui-ar.git | main | 9df49dc2 | Wed Apr 12 09:23:17 2023 | unknown
sd-webui-aspect-ratio-helper | https://github.com/thomasasfk/sd-webui-aspect-ratio-helper.git | main | 99fcf9b0 | Sun Jun 4 15:39:07 2023 | unknown
sd-webui-controlnet | https://github.com/Mikubill/sd-webui-controlnet.git | main | 30cc2ec8 | Sat Jul 1 20:14:06 2023 | unknown
sd-webui-infinite-image-browsing | https://github.com/zanllp/sd-webui-infinite-image-browsing.git | main | 56b9c1d4 | Sat Jul 1 10:42:00 2023 | unknown
sd-webui-llul | https://github.com/hnmr293/sd-webui-llul.git | master | aa47b3ee | Thu May 4 16:14:34 2023 | unknown
sd-webui-model-converter | https://github.com/Akegarasu/sd-webui-model-converter.git | main | 2a3834d7 | Wed Jun 28 13:41:44 2023 | unknown
sd-webui-openpose-editor | https://github.com/huchenlei/sd-webui-openpose-editor.git | main | 58acd347 | Sat Jul 1 20:01:19 2023 | unknown
sd-webui-regional-prompter | https://github.com/hako-mikan/sd-webui-regional-prompter.git | main | 18a512a9 | Sat Jul 1 14:41:22 2023 | unknown
sd-webui-supermerger | https://github.com/hako-mikan/sd-webui-supermerger.git | main | be03b81d | Sat Jul 1 16:44:17 2023 | unknown
stable-diffusion-webui-rembg | https://github.com/AUTOMATIC1111/stable-diffusion-webui-rembg.git | master | 3d9eedbb | Sun Jun 4 13:35:24 2023 | unknown
stable-diffusion-webui-state | https://github.com/ilian6806/stable-diffusion-webui-state.git | main | 85d8ef19 | Sat Jun 10 12:39:50 2023 | unknown
ultimate-upscale-for-automatic1111 | https://github.com/Coyote-A/ultimate-upscale-for-automatic1111.git | master | c99f382b | Tue Jun 13 04:29:35 2023 | unknown
LDSR | built-in | None |  | Sun Jul 2 05:30:34 2023 |
Lora | built-in | None |  | Sun Jul 2 05:30:34 2023 |
ScuNET | built-in | None |  | Sun Jul 2 05:30:34 2023 |
SwinIR | built-in | None |  | Sun Jul 2 05:30:34 2023 |
canvas-zoom-and-pan | built-in | None |  | Sun Jul 2 05:30:34 2023 |
prompt-bracket-checker | built-in | None |  | Sun Jul 2 05:30:34 2023 |
sd_theme_editor | built-in | None |  | Sun Jul 2 05:30:34 2023 |

Console logs

venv "C:\Stable Diffusion\venv\Scripts\Python.exe"
fatal: No names found, cannot describe anything.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: 1.4.0
Commit hash: 3843d60ae7a692932db9be5ca33625382be32ee9
Installing requirements

Launching Web UI with arguments: --api --xformers --upcast-sampling --listen --enable-insecure-extension-access
[-] ADetailer initialized. version: 23.6.4, num models: 8
2023-07-02 08:29:12,808 - ControlNet - INFO - ControlNet v1.1.227
ControlNet preprocessor location: C:\Stable Diffusion\extensions\sd-webui-controlnet\annotator\downloads
2023-07-02 08:29:13,132 - ControlNet - INFO - ControlNet v1.1.227
Loading weights [6a7951c22d] from C:\Stable Diffusion\models\Stable-diffusion\Volatile Breaking Point v2.1.safetensors
Creating model from config: C:\Stable Diffusion\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Loading VAE weights specified in settings: C:\Stable Diffusion\models\VAE\kl-f8-anime2.ckpt
Applying attention optimization: xformers... done.
Textual inversion embeddings loaded(15): aid28, aid291, bad-artist, bad-artist-anime, bad-hands-5, bad_prompt_version2, badv3, badv5, boring_e621, charturnerv2, deepnegative, deformityv6, easynegative, negative-hand, neutral_shylily
Model loaded in 11.7s (load weights from disk: 0.7s, create model: 1.1s, apply weights to model: 6.5s, apply half(): 1.0s, load VAE: 1.1s, move model to device: 0.9s, load textual inversion embeddings: 0.2s, calculate empty prompt: 0.1s).
preload_extensions_git_metadata for 26 extensions took 2.90s
Running on local URL:  http://0.0.0.0:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 42.2s (import torch: 10.1s, import gradio: 1.7s, import ldm: 1.0s, other imports: 2.9s, setup codeformer: 0.1s, list SD models: 0.2s, load scripts: 8.6s, scripts before_ui_callback: 0.1s, create ui: 12.6s, gradio launch: 4.7s).
100%|██████████████████████████████████████████████████████████████████████████████████| 50/50 [00:14<00:00,  3.48it/s]
        Tile 1/20
        Tile 2/20
        Tile 3/20
        Tile 4/20
        Tile 5/20
        Tile 6/20
        Tile 7/20
        Tile 8/20
        Tile 9/20
        Tile 10/20
        Tile 11/20
        Tile 12/20
        Tile 13/20
        Tile 14/20
        Tile 15/20
        Tile 16/20
        Tile 17/20
        Tile 18/20
        Tile 19/20
        Tile 20/20
100%|██████████████████████████████████████████████████████████████████████████████████| 50/50 [00:54<00:00,  1.09s/it]

Additional information

No response

Nyksia commented 1 year ago

With a bit more exploration, I tracked it down specifically to a .min.svelte style being applied to the element, which sets its minimum height to var(--size-24).
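For context, the rule described above would have roughly this shape in Gradio's generated stylesheet. This is a sketch based only on the finding in this comment: the real class name carries a build-specific hash (.min.svelte-<hash>), elided here.

```css
/* Approximate shape of the offending rule: while the counter is in its
   "recalculating" state, the generated .min class forces a large minimum
   height, so the counter overlays the prompt textarea below it. */
.min {
    min-height: var(--size-24);
}
```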

NamelessButler commented 12 months ago

What seems to happen is that the token counter update gets queued. This happens with the negative prompt box as well. If you do anything that takes some time, like merging models, and then change anything in the prompt box, the update gets queued until the current ongoing task ends.

Nyksia commented 11 months ago

> What seems to happen is that the token counter update gets queued. This happens with the negative prompt box as well. If you do anything that takes some time, like merging models, and then change anything in the prompt box, the update gets queued until the current ongoing task ends.

Oh no, the queuing alone wouldn't be the problem. The issue is that a style gets applied to the token counter element that makes its minimum size too large, so it overlaps the prompt box: using DevTools to disable that size override lets me edit the prompts just fine. If anything, it looks like an oversight that the token counter gets that size override specifically during recalculation.

SirVeggie commented 11 months ago

A temporary workaround is to add the following rule to the user.css file:

.token-counter {
    pointer-events: none;
}

Edited based on the comments below

Nyksia commented 11 months ago

> A temporary workaround is to add the following rule to the user.css file:

Nice, I knew there was a way to do that, but I didn't know which file I should modify to jury-rig a fix.

Nyksia commented 11 months ago

So, a few things to note:

  1. You may need to create the user.css file yourself.
  2. The workaround as originally posted only applied to the positive prompt's token counter, and only in txt2img. A better way to go about it:

    .token-counter {
        pointer-events: none;
    }

    Using the .token-counter class selector makes the rule apply to all token counters.
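Putting the thread's findings together, a user.css entry that addresses both symptoms might look like the following. This is a sketch based on the comments above (the .token-counter selector and the min-height behavior reported earlier), not a verified upstream fix.

```css
/* Let clicks pass through the counter overlay to the textarea beneath it
   (SirVeggie's workaround)... */
.token-counter {
    pointer-events: none;
    /* ...and keep the "recalculating" state from inflating the counter's
       height (based on the min-height / var(--size-24) finding above).
       !important is needed to beat the generated Svelte rule. */
    min-height: 0 !important;
}
```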