AUTOMATIC1111 / stable-diffusion-webui

Stable Diffusion web UI
GNU Affero General Public License v3.0

[Bug]: SendTo functionality is partially broken #7339

Open mart-hill opened 1 year ago

mart-hill commented 1 year ago

Is there an existing issue for this?

What happened?

Past parameters of image generations cannot be "sent" to the txt2img or img2img now:

[screenshot]

This is accompanied by:

Traceback (most recent call last):
  File "x:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 337, in run_predict
    output = await app.get_blocks().process_api(
  File "x:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1015, in process_api
    result = await self.call_function(
  File "x:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 833, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "x:\AI\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "x:\AI\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "x:\AI\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "x:\AI\stable-diffusion-webui\modules\generation_parameters_copypaste.py", line 297, in paste_func
    params = parse_generation_parameters(prompt)
  File "x:\AI\stable-diffusion-webui\modules\generation_parameters_copypaste.py", line 264, in parse_generation_parameters
    v = v[1:-1] if v[0] == '"' and v[-1] == '"' else v
IndexError: string index out of range
Traceback (most recent call last):
  File "x:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 337, in run_predict
    output = await app.get_blocks().process_api(
  File "x:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1015, in process_api
    result = await self.call_function(
  File "x:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 833, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "x:\AI\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "x:\AI\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "x:\AI\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "x:\AI\stable-diffusion-webui\modules\generation_parameters_copypaste.py", line 297, in paste_func
    params = parse_generation_parameters(prompt)
  File "x:\AI\stable-diffusion-webui\modules\generation_parameters_copypaste.py", line 264, in parse_generation_parameters
    v = v[1:-1] if v[0] == '"' and v[-1] == '"' else v
IndexError: string index out of range
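
The failing line indexes into the parameter value before checking that it is non-empty, so any empty value reproduces the error on its own (a minimal sketch of the same expression, run outside the webui):

v = ""  # an empty parameter value
v = v[1:-1] if v[0] == '"' and v[-1] == '"' else v
# IndexError: string index out of range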

(I tried it for both txt2img and img2img from Image Browser, but the "previous session restore" also ends up like this, unless the last WebUI session was "empty".)

Steps to reproduce the problem

  1. Go to Image Browser or use "Paste" from last non-empty session of WebUI.
  2. Press SendTo (txt2img or img2img)
  3. Profit? Nope. 🙂

What should have happened?

The generation parameters should "land" in their respective fields.

Commit where the problem happens

0a8515085ef258d4b76fdc000f7ed9d55751d6b8

What platforms do you use to access the UI ?

Windows

What browsers do you use to access the UI ?

Microsoft Edge

Command Line Arguments

@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set TMP=X:\AI\TEMP
set TEMP=X:\AI\TEMP
set SAFETENSORS_FAST_GPU=1
REM set PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.6,max_split_size_mb:24
set COMMANDLINE_ARGS=--xformers --api --deepdanbooru

List of extensions

ABG_extension | https://github.com/KutsuyaYuki/ABG_extension.git
DiffusionDefender | https://github.com/WildBanjos/DiffusionDefender.git
DreamArtist-sd-webui-extension | https://github.com/7eu7d7/DreamArtist-sd-webui-extension.git
Hypernetwork-MonkeyPatch-Extension | https://github.com/aria1th/Hypernetwork-MonkeyPatch-Extension
PromptGallery-stable-diffusion-webui | https://github.com/dr413677671/PromptGallery-stable-diffusion-webui.git
SD-latent-mirroring | https://github.com/dfaker/SD-latent-mirroring
StylePile | https://github.com/some9000/StylePile
Umi-AI | https://github.com/Klokinator/Umi-AI
a1111-sd-webui-haku-img | https://github.com/KohakuBlueleaf/a1111-sd-webui-haku-img.git
a1111-sd-webui-tagcomplete | https://github.com/DominikDoom/a1111-sd-webui-tagcomplete
asymmetric-tiling-sd-webui | https://github.com/tjm35/asymmetric-tiling-sd-webui.git
booru2prompt | https://github.com/Malisius/booru2prompt.git
custom-diffusion-webui | https://github.com/guaneec/custom-diffusion-webui.git
ddetailer | https://github.com/dustysys/ddetailer.git
embedding-inspector | https://github.com/tkalayci71/embedding-inspector.git
model-keyword | https://github.com/mix1009/model-keyword
multi-subject-render | https://github.com/Extraltodeus/multi-subject-render.git
novelai-2-local-prompt | https://github.com/animerl/novelai-2-local-prompt
prompt-fusion-extension | https://github.com/ljleb/prompt-fusion-extension.git
sd-dynamic-prompts | https://github.com/adieyal/sd-dynamic-prompts
sd-extension-steps-animation | https://github.com/vladmandic/sd-extension-steps-animation
sd-extension-system-info | https://github.com/vladmandic/sd-extension-system-info
sd-infinity-grid-generator-script | https://github.com/mcmonkeyprojects/sd-infinity-grid-generator-script.git
sd-webui-additional-networks | https://github.com/kohya-ss/sd-webui-additional-networks.git
sd-webui-gelbooru-prompt | https://github.com/antis0007/sd-webui-gelbooru-prompt.git
sd-webui-model-converter | https://github.com/Akegarasu/sd-webui-model-converter
sd-webui-multiple-hypernetworks | https://github.com/antis0007/sd-webui-multiple-hypernetworks.git
sd_dreambooth_extension | https://github.com/d8ahazard/sd_dreambooth_extension
sd_save_intermediate_images | https://github.com/AlUlkesh/sd_save_intermediate_images
sdweb-merge-block-weighted-gui | https://github.com/bbc-mc/sdweb-merge-block-weighted-gui
sdweb-merge-board | https://github.com/bbc-mc/sdweb-merge-board.git
seed_travel | https://github.com/yownas/seed_travel.git
shift-attention | https://github.com/yownas/shift-attention.git
stable-diffusion-webui-Prompt_Generator | https://github.com/imrayya/stable-diffusion-webui-Prompt_Generator
stable-diffusion-webui-aesthetic-gradients | https://github.com/AUTOMATIC1111/stable-diffusion-webui-aesthetic-gradients
stable-diffusion-webui-aesthetic-image-scorer | https://github.com/tsngo/stable-diffusion-webui-aesthetic-image-scorer
stable-diffusion-webui-artists-to-study | https://github.com/camenduru/stable-diffusion-webui-artists-to-study
stable-diffusion-webui-cafe-aesthetic | https://github.com/p1atdev/stable-diffusion-webui-cafe-aesthetic.git
stable-diffusion-webui-conditioning-highres-fix | https://github.com/klimaleksus/stable-diffusion-webui-conditioning-highres-fix.git
stable-diffusion-webui-daam | https://github.com/kousw/stable-diffusion-webui-daam.git
stable-diffusion-webui-dataset-tag-editor | https://github.com/toshiaki1729/stable-diffusion-webui-dataset-tag-editor
stable-diffusion-webui-embedding-editor | https://github.com/CodeExplode/stable-diffusion-webui-embedding-editor.git
stable-diffusion-webui-images-browser | https://github.com/yfszzx/stable-diffusion-webui-images-browser
stable-diffusion-webui-inspiration | https://github.com/yfszzx/stable-diffusion-webui-inspiration
stable-diffusion-webui-instruct-pix2pix | https://github.com/Klace/stable-diffusion-webui-instruct-pix2pix.git
stable-diffusion-webui-pixelization | https://github.com/AUTOMATIC1111/stable-diffusion-webui-pixelization.git
stable-diffusion-webui-prompt-travel | https://github.com/Kahsolt/stable-diffusion-webui-prompt-travel.git
stable-diffusion-webui-promptgen | https://github.com/AUTOMATIC1111/stable-diffusion-webui-promptgen
stable-diffusion-webui-randomize | https://github.com/innightwolfsleep/stable-diffusion-webui-randomize
stable-diffusion-webui-sonar | https://github.com/Kahsolt/stable-diffusion-webui-sonar
stable-diffusion-webui-tokenizer | https://github.com/AUTOMATIC1111/stable-diffusion-webui-tokenizer.git
stable-diffusion-webui-visualize-cross-attention-extension | https://github.com/benkyoujouzu/stable-diffusion-webui-visualize-cross-attention-extension.git
stable-diffusion-webui-wd14-tagger | https://github.com/toriato/stable-diffusion-webui-wd14-tagger.git
stable-diffusion-webui-wildcards | https://github.com/AUTOMATIC1111/stable-diffusion-webui-wildcards
training-picker | https://github.com/Maurdekye/training-picker
ultimate-upscale-for-automatic1111 | https://github.com/Coyote-A/ultimate-upscale-for-automatic1111.git
unprompted | https://github.com/ThereforeGames/unprompted
LDSR | built-in
Lora | built-in
ScuNET | built-in
SwinIR | built-in
prompt-bracket-checker | built-in

Console logs

venv "x:\AI\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec  6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Commit hash: 0a8515085ef258d4b76fdc000f7ed9d55751d6b8
Installing requirements for Web UI
Installing requirements for Anime Background Remover
Installing requirements for Anime Background Remover
Installing requirements for Anime Background Remover

Installing requirements for scikit_learn

Installing requirements for Prompt Gallery

Installing sd-dynamic-prompts requirements.txt

#######################################################################################################
Initializing Dreambooth
If submitting an issue on github, please provide the below text for debugging purposes:

Python revision: 3.10.9 (tags/v3.10.9:1dd9be6, Dec  6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Dreambooth revision: 9f4d931a319056c537d24669cb950d146d1537b0
SD-WebUI revision: 0a8515085ef258d4b76fdc000f7ed9d55751d6b8

Checking Dreambooth requirements...
[+] bitsandbytes version 0.35.0 installed.
[+] diffusers version 0.10.2 installed.
[+] transformers version 4.25.1 installed.
[+] xformers version 0.0.16rc425 installed.
[+] torch version 1.13.1+cu117 installed.
[+] torchvision version 0.14.1+cu117 installed.

#######################################################################################################

Installing requirements for dataset-tag-editor [onnxruntime-gpu]

Launching Web UI with arguments: --xformers --api --deepdanbooru
Loading booru2prompt settings
[AddNet] Updating model hashes...
0it [00:00, ?it/s]
Hypernetwork-MonkeyPatch-Extension found!
SD-Webui API layer loaded
Installing pywin32
Error loading script: training_picker.py
Traceback (most recent call last):
  File "x:\AI\stable-diffusion-webui\modules\scripts.py", line 229, in load_scripts
    script_module = script_loading.load_module(scriptfile.path)
  File "x:\AI\stable-diffusion-webui\modules\script_loading.py", line 11, in load_module
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "x:\AI\stable-diffusion-webui\extensions\training-picker\scripts\training_picker.py", line 16, in <module>
    from modules.ui import create_refresh_button, folder_symbol
ImportError: cannot import name 'folder_symbol' from 'modules.ui' (x:\AI\stable-diffusion-webui\modules\ui.py)

Loading weights [4e4457c771] from x:\AI\sd_v1-5_vae.ckpt
Creating model from config: x:\AI\stable-diffusion-webui\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Loading VAE weights specified in settings: x:\AI\stable-diffusion-webui\models\VAE\vae-ft-mse-840000-ema-pruned.safetensors
Applying xformers cross attention optimization.
Textual inversion embeddings loaded(xxx): 1man, 2000ccplus, 3N1DS1NCL41R , 80s-anime-ai-being, 80s-anime-ai, 80s-car, albino_style, andava, ao_style-7500, ao_style, art by Smoose2, ...
Textual inversion embeddings skipped(xxx): AnalogFilm768-BW-Classic, AnalogFilm768-BW-Modern, AnalogFilm768-BW-Tintype, AnalogFilm768-BW-Vintage, AnalogFilm768-Old-School, AnalogFilm768, Apoc768, Art by Smoose-22, art by Smoose22, ...
Model loaded in 5.0s (create model: 0.5s, apply weights to model: 0.7s, apply half(): 0.6s, load VAE: 0.1s, move model to device: 0.9s, load textual inversion embeddings: 2.0s).
Textual inversion embeddings loaded(0):
Textual inversion embeddings loaded(0):
INFO:     Started server process [23800]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://localhost:5173 (Press CTRL+C to quit)
INFO:     ::1:18771 - "GET / HTTP/1.1" 200 OK
add tab
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Traceback (most recent call last):
  File "x:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 337, in run_predict
    output = await app.get_blocks().process_api(
  File "x:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1015, in process_api
    result = await self.call_function(
  File "x:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 833, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "x:\AI\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "x:\AI\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "x:\AI\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "x:\AI\stable-diffusion-webui\modules\generation_parameters_copypaste.py", line 297, in paste_func
    params = parse_generation_parameters(prompt)
  File "x:\AI\stable-diffusion-webui\modules\generation_parameters_copypaste.py", line 264, in parse_generation_parameters
    v = v[1:-1] if v[0] == '"' and v[-1] == '"' else v
IndexError: string index out of range
Traceback (most recent call last):
  File "x:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 337, in run_predict
    output = await app.get_blocks().process_api(
  File "x:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1015, in process_api
    result = await self.call_function(
  File "x:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 833, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "x:\AI\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "x:\AI\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "x:\AI\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "x:\AI\stable-diffusion-webui\modules\generation_parameters_copypaste.py", line 297, in paste_func
    params = parse_generation_parameters(prompt)
  File "x:\AI\stable-diffusion-webui\modules\generation_parameters_copypaste.py", line 264, in parse_generation_parameters
    v = v[1:-1] if v[0] == '"' and v[-1] == '"' else v
IndexError: string index out of range

Additional information

No response

EllangoK commented 1 year ago

Can you post the PNG info of that image?

I am on 0a8515085ef258d4b76fdc000f7ed9d55751d6b8 as well and I see no issues.

mart-hill commented 1 year ago

Sure! It's practically every image I generated prior to this update. Even the PNG Info tab, despite "allowing" me to send the info to the txt2img tab, errors out (the model selector is empty), and generating the same image fails as well:


Example parameters for the following error:

(extremely detailed CG unity 8k wallpaper), stunning, hdr, subsurface scattering, global illumination, film still, Film-like, bokeh, realism, pretty landscape \(woodland:1.22\) with thick greenery
Negative prompt: lowres, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name, poorly drawn, crippled, crooked, broken, weird, odd, distorted, erased, cut, mutilated, sloppy, hideous, ugly, pixelated, aliasing, lowres
Steps: 45, Sampler: Euler a, CFG scale: 7, Seed: 720672102, Size: 640x1024, Model hash: e3ce7206, Batch size: 2, Batch pos: 1, Denoising strength: 0.75, Clip skip: 2, ENSD: 31337, Hires upscale: 2, Hires upscaler: Latent
Expected one of:
        * <END-OF-FILE>

Previous tokens: Token('FREE_FLOAT', '1.2')

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "x:\AI\stable-diffusion-webui\modules\call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "x:\AI\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "x:\AI\stable-diffusion-webui\modules\txt2img.py", line 52, in txt2img
    processed = process_images(p)
  File "x:\AI\stable-diffusion-webui\modules\processing.py", line 487, in process_images
    res = process_images_inner(p)
  File "x:\AI\stable-diffusion-webui\modules\processing.py", line 619, in process_images_inner
    c = get_conds_with_caching(prompt_parser.get_multicond_learned_conditioning, prompts, p.steps, cached_c)
  File "x:\AI\stable-diffusion-webui\modules\processing.py", line 573, in get_conds_with_caching
    cache[1] = function(shared.sd_model, required_prompts, steps)
  File "x:\AI\stable-diffusion-webui\modules\prompt_parser.py", line 205, in get_multicond_learned_conditioning
    learned_conditioning = get_learned_conditioning(model, prompt_flat_list, steps)
  File "x:\AI\stable-diffusion-webui\extensions\prompt-fusion-extension\lib_prompt_fusion\hijacker.py", line 15, in wrapper
    return function(*args, **kwargs, original_function=self.__original_functions[attribute])
  File "x:\AI\stable-diffusion-webui\extensions\prompt-fusion-extension\scripts\promptlang.py", line 25, in _hijacked_get_learned_conditioning
    tensor_builders = _parse_tensor_builders(prompts, total_steps)
  File "x:\AI\stable-diffusion-webui\extensions\prompt-fusion-extension\scripts\promptlang.py", line 41, in _parse_tensor_builders
    expr = parse_prompt(prompt)
  File "x:\AI\stable-diffusion-webui\extensions\prompt-fusion-extension\lib_prompt_fusion\prompt_parser.py", line 130, in parse_prompt
    return parse_expression(prompt.lstrip())
  File "x:\AI\stable-diffusion-webui\venv\lib\site-packages\lark\lark.py", line 625, in parse
    return self.parser.parse(text, start=start, on_error=on_error)
  File "x:\AI\stable-diffusion-webui\venv\lib\site-packages\lark\parser_frontends.py", line 96, in parse
    return self.parser.parse(stream, chosen_start, **kw)
  File "x:\AI\stable-diffusion-webui\venv\lib\site-packages\lark\parsers\lalr_parser.py", line 41, in parse
    return self.parser.parse(lexer, start)
  File "x:\AI\stable-diffusion-webui\venv\lib\site-packages\lark\parsers\lalr_parser.py", line 171, in parse
    return self.parse_from_state(parser_state)
  File "x:\AI\stable-diffusion-webui\venv\lib\site-packages\lark\parsers\lalr_parser.py", line 188, in parse_from_state
    raise e
  File "x:\AI\stable-diffusion-webui\venv\lib\site-packages\lark\parsers\lalr_parser.py", line 178, in parse_from_state
    for token in state.lexer.lex(state):
  File "x:\AI\stable-diffusion-webui\venv\lib\site-packages\lark\lexer.py", line 537, in lex
    raise UnexpectedToken(token, e.allowed, state=parser_state, token_history=[last_token], terminals_by_name=self.root_lexer.terminals_by_name)
lark.exceptions.UnexpectedToken: Unexpected token Token('TEXT', '\\)') at line 1, column 273.
Expected one of:
        * $END
Previous tokens: [Token('FREE_FLOAT', '1.2')]

The source of this error is the notation \(woodland:1.22\), which wasn't an issue before. (I had the a1111-sd-webui-tagcomplete extension installed until, with the newer WebUI, it started hanging the whole webpage tab with one CPU thread busy at 100%.)
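
For context, \(...\) is the webui's escape for literal parentheses rather than emphasis, which appears to be what the extension's grammar trips on. A rough illustration of that unescaping (simplified, and not the prompt-fusion parser that raised the error above):

import re

# Illustrative only: in the base webui prompt syntax, "\(" and "\)" stand for
# literal parentheses, as opposed to the (text:1.22) emphasis syntax.
prompt = r"pretty landscape \(woodland:1.22\) with thick greenery"
literal = re.sub(r"\\([()\[\]])", r"\1", prompt)
print(literal)  # pretty landscape (woodland:1.22) with thick greenery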


Example from the OP error:

a woodland Negative prompt: blurry Steps: 30, Sampler: Euler a, CFG scale: 7.5, Seed: 2289365603, Size: 512x640, Model hash: 4e4457c771, Denoising strength: 0.55, Clip skip: 2, ENSD: 31337, Wildcard prompt: "on the road", File includes: , Hires resize: 896x1152, Hires upscaler: Latent (bicubic antialiased)

It probably has to do with the UmiAI/unprompted/Wildcards syntax, I believe - though it worked before, just like the \(xxx\) syntax did.

⏫ This parameter text was also sent from the PNG Info tab this time around.
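
Looking at that text, the likely culprit is the empty value after "File includes:". A rough, self-contained stand-in for the key/value scan (not the module's actual re_param pattern) picks it out:

import re

# Only the parameter portion of the infotext above; the prompt lines are
# handled separately in the real parser.
lastline = ('Steps: 30, Sampler: Euler a, CFG scale: 7.5, Seed: 2289365603, '
            'Size: 512x640, Model hash: 4e4457c771, Denoising strength: 0.55, '
            'Clip skip: 2, ENSD: 31337, Wildcard prompt: "on the road", '
            'File includes: , Hires resize: 896x1152, '
            'Hires upscaler: Latent (bicubic antialiased)')

for k, v in re.findall(r'\s*([\w ]+):\s*("[^"]*"|[^,]*)(?:,|$)', lastline):
    if v == "":
        print(f"empty value for {k!r}")  # -> empty value for 'File includes'

An empty value like this is exactly the case the v[0] check in parse_generation_parameters cannot handle.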

mart-hill commented 1 year ago

I also noticed that since the change in how the model hashes are handled, sometimes the models are not being loaded - Checkpoint xxxx.safetensors [5cbb645d04] not found; loading fallback 1111.safetensors [27cc94594f] - and then the alphabetically first model is loaded as a fallback (which is a nice idea on its own). Of course, the model that should be loaded exists as a file, and usually, after shuffling with the model loading, I'm finally able to load the one from the parameters or chosen by hand. 🙂 It happens intermittently, and deleting the cache.json file while WebUI is running doesn't help. I'll test it with WebUI off as well.
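
As a side note on the hash change itself: the 8-character hashes in old infotexts and the 10-character ones in new infotexts are computed differently, which would explain why old references stop matching. A rough sketch of the two schemes as I understand them (not copied from the webui source, so treat the details as assumptions):

import hashlib

def old_style_hash(path):
    # Old short hash: sha256 over a small slice of the checkpoint file, first 8 hex chars.
    with open(path, "rb") as f:
        f.seek(0x100000)
        return hashlib.sha256(f.read(0x10000)).hexdigest()[:8]

def new_style_hash(path):
    # Newer hash: sha256 over the whole file, with a 10-character prefix shown in infotexts.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()[:10]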

gsgoldma commented 1 year ago

REM set PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.6,max_split_size_mb:24

What does that line do exactly in your webui-user.bat file?

REM set PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.6,max_split_size_mb:24
estilog commented 1 year ago

Same issue here with similar error:

Traceback (most recent call last):
  File "C:\Users\username\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 337, in run_predict
    output = await app.get_blocks().process_api(
  File "C:\Users\username\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1015, in process_api
    result = await self.call_function(
  File "C:\Users\username\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 833, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\Users\username\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\Users\username\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "C:\Users\username\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "C:\Users\username\stable-diffusion-webui\modules\generation_parameters_copypaste.py", line 339, in paste_func
    params = parse_generation_parameters(prompt)
  File "C:\Users\username\stable-diffusion-webui\modules\generation_parameters_copypaste.py", line 263, in parse_generation_parameters
    v = v[1:-1] if v[0] == '"' and v[-1] == '"' else v
IndexError: string index out of range

mart-hill commented 1 year ago

REM set PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.6,max_split_size_mb:24

what's that line do exactly in your webui-user.bat file?

REM set PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.6,max_split_size_mb:24

I was trying to see if this improves generation or training performance. I saw this setting in someone's post (he had an RTX 3060, if I recall correctly); I have a 3090, so it was pure curiosity on my part. It seems to make full use of VRAM at least while generating the image (I noticed that with Afterburner). Since I didn't see much improvement, I commented the line out ("REM") for later use, in case anything new comes up. 🙂

testFaze commented 1 year ago

I'm getting the same error as mart-hill and estilog. This seems to have gone quiet - have you found a fix? If not, why does no one seem to care about this? It seems like quite a big deal. I'm not even sure why PNG Info > 'Send to' works for some images and not others.

testFaze commented 1 year ago

OK, so I think I have worked this out (for me, anyway). If you use the aesthetic embeddings extension, the 'Aesthetic text:' field is often blank, which crashes generation_parameters_copypaste.py at line 263. That block needs to be modified as follows so it just ignores the blank field - then it works OK:

for k, v in re_param.findall(lastline):
    if len(v) == 0:
        res[k] = ""
    else:
        v = v[1:-1] if v[0] == '"' and v[-1] == '"' else v
        m = re_imagesize.match(v)
        if m is not None:
            res[k+"-1"] = m.group(1)
            res[k+"-2"] = m.group(2)
        else:
            res[k] = v
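
A quick standalone check of that guard, using simplified stand-ins for re_param and re_imagesize (the module's exact patterns are not reproduced here):

import re

re_param = re.compile(r'\s*([\w ]+):\s*("[^"]*"|[^,]*)(?:,|$)')  # simplified stand-in
re_imagesize = re.compile(r'^(\d+)x(\d+)$')                      # simplified stand-in

lastline = 'Steps: 45, Size: 640x1024, Aesthetic text: , CFG scale: 7'
res = {}
for k, v in re_param.findall(lastline):
    if len(v) == 0:
        res[k] = ""
    else:
        v = v[1:-1] if v[0] == '"' and v[-1] == '"' else v
        m = re_imagesize.match(v)
        if m is not None:
            res[k+"-1"] = m.group(1)
            res[k+"-2"] = m.group(2)
        else:
            res[k] = v

print(res)
# {'Steps': '45', 'Size-1': '640', 'Size-2': '1024', 'Aesthetic text': '', 'CFG scale': '7'}
# Without the len(v) == 0 guard, the empty 'Aesthetic text' value raises the IndexError above.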

estilog commented 1 year ago

Thank you, testFaze! This seems to have fixed the issue.

Edit: Now the model does not get changed when the parameters are applied (even with the setting 'When reading generation parameters from text into UI (from PNG info or pasted text), do not change the selected model/checkpoint.' unchecked)

mart-hill commented 1 year ago

I noticed that generations using UmiAI (wildcards like <[pool]>, which are "converted" at runtime into randomly picked variants of pool facilities and positioning for the prompt - that's UmiAI) also cause the "SendTo" bug shown in the OP. I just tested it with Image Browser (this maintained version). To use this extension, I disabled the stable-diffusion-webui-wildcards and unprompted extensions.

Example of data that would cause that bug:

realistic, highest quality, ((scifi)), lens flare, ((light sparkles)), digital painting, trending on ArtStation, trending on CGSociety, intricate, high detail, dramatic, realism, beautiful and detailed lighting, shadows, best quality, highly detailed, 8k, stunning, hdr, subsurface scattering, global illumination, film-like, bokeh, cat beside Innertube At Pool; Negative prompt: NEGS Bad Image v1, NEGS Bad Prompt v2, NEGS Bad Hands, NEGS Bad Artist OG, NEGS Deep Negative v1.75T, lowres, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name, bad proportions, amputee Steps: 30, Sampler: Euler a, CFG scale: 7, Seed: 341562910, Size: 448x640, Model hash: 7e708cce19, Denoising strength: 0.75, Clip skip: 2, ENSD: 31337, Wildcard prompt: "realistic, highest quality, ((scifi)), lens flare, ((light sparkles)), digital painting, trending on ArtStation, trending on CGSociety, intricate, high detail, dramatic, realism, beautiful and detailed lighting, shadows, best quality, highly detailed, 8k, stunning, hdr, subsurface scattering, global illumination, film-like, bokeh, realism, cat <[pool]>; ", File includes: , Hires resize: 832x1216, Hires upscaler: Latent (nearest-exact)

I put the parts that UmiAI parses and inserts into the "real" prompt at runtime in bold.
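
To illustrate the substitution described above (purely hypothetical variant list and helper; this is not UmiAI's actual implementation):

import random
import re

pool_variants = [
    "beside Innertube At Pool",      # the variant that ended up in the prompt above
    "swimming in an outdoor pool",   # hypothetical further variants
    "lounging on a pool deck chair",
]

def expand_wildcards(prompt, variants):
    # Replace every <[pool]> token with a randomly picked variant at generation time.
    return re.sub(r"<\[pool\]>", lambda m: random.choice(variants), prompt)

print(expand_wildcards("cat <[pool]>; ", pool_variants))

The stored "Wildcard prompt" keeps the <[pool]> token while the actual prompt gets the expanded text, which is why the two differ in the infotext.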

AlUlkesh commented 1 year ago

For me SendTo does not apply the following parameters anymore:

This is on a fresh install with no extensions. "When reading generation parameters..." is unchecked.

z:\AI\tests\stable-diffusion-webui_20230213>webui-user.bat
venv "z:\AI\tests\stable-diffusion-webui_20230213\venv\Scripts\Python.exe"
Python 3.10.8 (tags/v3.10.8:aaaf517, Oct 11 2022, 16:50:30) [MSC v.1933 64 bit (AMD64)]
Commit hash: ea9bd9fc7409109adcd61b897abc2c8881161256
Installing requirements for Web UI
Launching Web UI with arguments: --xformers --medvram --administrator
Loading weights [812cd9f9d9] from Z:\AI\tests\stable-diffusion-webui_20230213\models\Stable-diffusion\Anything-V3.0-pruned-fp16.ckpt
Creating model from config: Z:\AI\tests\stable-diffusion-webui_20230213\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Loading VAE weights found near the checkpoint: Z:\AI\tests\stable-diffusion-webui_20230213\models\Stable-diffusion\Anything-V3.0-pruned-fp16.vae.pt
Applying xformers cross attention optimization.
Textual inversion embeddings loaded(0):
Model loaded in 2.4s (load weights from disk: 0.6s, create model: 0.3s, apply weights to model: 0.4s, apply half(): 0.6s, load VAE: 0.4s).
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
AlUlkesh commented 1 year ago

Perhaps I found the root of the problem, just not a solution yet.

This issue apparently first appeared with the "override" functionality: https://github.com/AUTOMATIC1111/stable-diffusion-webui/commit/938578e8a94883aa3c0075cf47eea64f66119541

SendTo apparently does not fill in the overrides, as it is probably supposed to do.

If I take the gen-parameters shown in pnginfo and simply copy + paste them into the prompt field and press ↙, they do appear:

[screenshot]

So the question is how to make SendTo behave like ↙, or how to make it work like before, without overrides - which I would prefer.

AlUlkesh commented 1 year ago

Since I didn't get an answer on what the intention was (#7803), I have to assume it is to use overrides. I come to this conclusion because things like Clip Skip are now only available that way.

So based on that, I developed this PR. Works fine on my machine with all pnginfo variations I could think of.

Once this is merged, I can also incorporate it into the images browser.

mart-hill commented 1 year ago

Since I didn't get an answer on what the intention was (#7803), I have to assume it is to use overrides. I come to this conclusion because things like Clip Skip are now only available that way.

So based on that, I developed this PR. Works fine on my machine with all pnginfo variations I could think of.

Once this is merged, I can also incorporate it into the images browser.

Thank you, your fix helps. In the case where a "Wildcard prompt" part is present, your fix simply omits it and pastes the rest of the parameters (excluding changing the model, but that's OK) the way it should, right? 🙂

AlUlkesh commented 1 year ago

in case of "Wildcard prompt" part presence, your fix simply omits it, and pastes the rest of the parameters (excluding changing model, but that's OK) the way it should be, right?

I haven't used wildcards so far; what does that look like on the pnginfo screen?

The model should be overridden, like so: [screenshot]

This override is used instead of the normal model dropdown, until you hit the little x.

mart-hill commented 1 year ago

in case of "Wildcard prompt" part presence, your fix simply omits it, and pastes the rest of the parameters (excluding changing model, but that's OK) the way it should be, right?

I haven't used wildcards so far; what does that look like on the pnginfo screen?

The model should be overridden, like so: [screenshot]

This override is used instead of the normal model dropdown, until you hit the little x.

It looks like this:

realistic, highest quality, ((scifi)), lens flare, ((light sparkles)), digital painting, trending on ArtStation, trending on CGSociety, intricate, high detail, dramatic, realism, beautiful and detailed lighting, shadows, best quality, highly detailed, 8k, stunning, hdr, subsurface scattering, global illumination, film-like, bokeh, cat beside Innertube At Pool; <> Negative prompt: NEGS Bad Image v1, NEGS Bad Prompt v2, NEGS Bad Hands, NEGS Bad Artist OG, NEGS Deep Negative v1.75T, lowres, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name, bad proportions, amputee Steps: 30, Sampler: Euler a, CFG scale: 7, Seed: 341562910, Size: 448x640, Model hash: 7e708cce19, Denoising strength: 0.75, Clip skip: 2, ENSD: 31337, Wildcard prompt: "realistic, highest quality, ((scifi)), lens flare, ((light sparkles)), digital painting, trending on ArtStation, trending on CGSociety, intricate, high detail, dramatic, realism, beautiful and detailed lighting, shadows, best quality, highly detailed, 8k, stunning, hdr, subsurface scattering, global illumination, film-like, bokeh, realism, cat <[pool]>; hypernet:LuisapMagiclight_v1:0.35", File includes: , Hires resize: 832x1216, Hires upscaler: Latent (nearest-exact)

I made the wildcard part bold. The word parsed by UmiAI is "pool". If I were to use such a wildcard word in the negative prompt, it would probably appear there, too.

About the model change - despite having the option set to allow the "paste" function to change the model, it doesn't do it. [screenshot]

AlUlkesh commented 1 year ago

OK, I looked at the Wildcard thing. Yes, since there's no "Wildcard prompt" field in the UI, it can't do anything with it. The logic to replace the normal prompt with it would have to be handled by the wildcard extension.

About the model: yes, that doesn't seem to work. Another reason why I now use overrides to achieve the same thing.