AUTOMATIC1111 / stable-diffusion-webui

Stable Diffusion web UI
GNU Affero General Public License v3.0

[Bug]: There's something going on with model choosing and loading - WebUI sometimes falls back to previous one used (new issue), or the first alphabetical one (older issue), intermittently. #7384

Closed: mart-hill closed this issue 1 year ago

mart-hill commented 1 year ago

Is there an existing issue for this?

What happened?

While changing the model, the UI seemed to load it, but at the moment of generating an image it either threw an error, reported that it was falling back (because the chosen model was "not found", which isn't true), or simply loaded the previous model.

Checkpoint model_114950_based_on_ratnikamix-v2.ckpt [4852cca101] not found; loading fallback ((f222+111)0.5)+(mdiffusionv2)0.35.safetensors [27cc90594f]
Loading weights [27cc90594f] from X:\AI\stable-diffusion-webui\models\Stable-diffusion\((f222+111)0.5)+(mdiffusionv2)0.35.safetensors
Creating model from config: X:\AI\stable-diffusion-webui\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\anyio\streams\memory.py", line 94, in receive
    return self.receive_nowait()
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\anyio\streams\memory.py", line 89, in receive_nowait
    raise WouldBlock
anyio.WouldBlock

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 77, in call_next
    message = await recv_stream.receive()
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\anyio\streams\memory.py", line 114, in receive
    raise EndOfStream
anyio.EndOfStream

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 407, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 78, in __call__
    return await self.app(scope, receive, send)
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\fastapi\applications.py", line 270, in __call__
    await super().__call__(scope, receive, send)
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\applications.py", line 124, in __call__
    await self.middleware_stack(scope, receive, send)
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\errors.py", line 184, in __call__
    raise exc
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\errors.py", line 162, in __call__
    await self.app(scope, receive, _send)
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 106, in __call__
    response = await self.dispatch_func(request, call_next)
  File "X:\AI\stable-diffusion-webui\modules\api\api.py", line 96, in log_and_time
    res: Response = await call_next(req)
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 80, in call_next
    raise app_exc
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 69, in coro
    await self.app(scope, receive_or_disconnect, send_no_error)
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\gzip.py", line 24, in __call__
    await responder(scope, receive, send)
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\gzip.py", line 43, in __call__
    await self.app(scope, receive, self.send_with_gzip)
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\exceptions.py", line 79, in __call__
    raise exc
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 21, in __call__
    raise e
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 18, in __call__
    await self.app(scope, receive, send)
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 706, in __call__
    await route.handle(scope, receive, send)
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 276, in handle
    await self.app(scope, receive, send)
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 66, in app
    response = await func(request)
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\fastapi\routing.py", line 235, in app
    raw_response = await run_endpoint_function(
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\fastapi\routing.py", line 163, in run_endpoint_function
    return await run_in_threadpool(dependant.call, **values)
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\concurrency.py", line 41, in run_in_threadpool
    return await anyio.to_thread.run_sync(func, *args)
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "X:\AI\stable-diffusion-webui\modules\progress.py", line 85, in progressapi
    shared.state.set_current_image()
  File "X:\AI\stable-diffusion-webui\modules\shared.py", line 241, in set_current_image
    self.do_set_current_image()
  File "X:\AI\stable-diffusion-webui\modules\shared.py", line 249, in do_set_current_image
    self.assign_current_image(modules.sd_samplers.samples_to_image_grid(self.current_latent))
  File "X:\AI\stable-diffusion-webui\modules\sd_samplers.py", line 135, in samples_to_image_grid
    return images.image_grid([single_sample_to_image(sample, approximation) for sample in samples])
  File "X:\AI\stable-diffusion-webui\modules\sd_samplers.py", line 135, in <listcomp>
    return images.image_grid([single_sample_to_image(sample, approximation) for sample in samples])
  File "X:\AI\stable-diffusion-webui\modules\sd_samplers.py", line 122, in single_sample_to_image
    x_sample = processing.decode_first_stage(shared.sd_model, sample.unsqueeze(0))[0]
  File "X:\AI\stable-diffusion-webui\modules\processing.py", line 422, in decode_first_stage
    x = model.decode_first_stage(x)
AttributeError: 'NoneType' object has no attribute 'decode_first_stage'
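The final `AttributeError` suggests a race: the `/internal/progress` endpoint calls `set_current_image()` while a checkpoint swap is in flight, at which point `shared.sd_model` is briefly `None`. A minimal sketch of the failure mode and a None-guard (class and method names here are illustrative, loosely mirroring `modules/shared.py`, not the actual webui code):

```python
# Sketch of the race: a progress request tries to render a live preview while
# the model is being reloaded and the shared model reference is still None.

class State:
    def __init__(self, sd_model=None):
        self.sd_model = sd_model       # None while a checkpoint reload is in flight
        self.current_image = None

    def do_set_current_image(self, latent):
        # Guard: if the model was unloaded by a concurrent reload, keep the
        # previous preview instead of calling decode on None (which would
        # raise the AttributeError seen above).
        if self.sd_model is None:
            return
        self.current_image = self.sd_model.decode(latent)

class DummyModel:
    def decode(self, latent):
        return f"decoded:{latent}"

state = State(sd_model=None)
state.do_set_current_image("z0")                 # no AttributeError: preview skipped
reloading_done = State(sd_model=DummyModel())
reloading_done.do_set_current_image("z0")        # normal path once reload finishes
```

With the guard, a progress poll that lands mid-reload is harmless; without it, the poll crashes exactly as in the traceback.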

Steps to reproduce the problem

  1. Choose a model.
  2. Wait a bit until it loads.
  3. See in the log window that the model loaded.
  4. Press Generate.
  5. The WebUI either immediately loads the previously used model and generates an image with it, OR says `Checkpoint model_114950_based_on_ratnikamix-v2.ckpt [4852cca101] not found; loading fallback` [first model, alphabetically, in the folder]. In the case where the previously used model is loaded, the next attempt to load the model I wanted throws the wall of errors shown above.
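One plausible reading of the "not found; loading fallback" branch: the dropdown submits the checkpoint *title* it was showing, and if the server's checkpoint registry has since been re-keyed (for example after computing the full sha256, as in the logs), the lookup misses and the code falls back to the first registry entry. A hypothetical illustration (the dict and function are illustrative, not the actual `sd_models` code):

```python
# Illustrative sketch of a title-keyed checkpoint lookup with a fallback to
# the first (alphabetically sorted) entry when the submitted title misses.

checkpoints = {  # title -> loaded model (sorted so "(..." comes first)
    "((f222+111)0.5)+(mdiffusionv2)0.35.safetensors [27cc90594f]": "modelA",
    "model_114950_based_on_ratnikamix-v2.ckpt [4852cca101]": "modelB",
}

def find_checkpoint(title):
    if title in checkpoints:
        return checkpoints[title]
    fallback = next(iter(checkpoints))   # first entry in the registry
    print(f"Checkpoint {title} not found; loading fallback {fallback}")
    return checkpoints[fallback]

# A stale title (e.g. carrying an old short hash) misses and triggers fallback:
find_checkpoint("model_114950_based_on_ratnikamix-v2.ckpt [old-hash]")
```

This would explain why the fallback is always the alphabetically first model in the folder, and why the problem appears right after a hash is (re)calculated.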

What should have happened?

The model should have stayed selected, and the image should have been generated using that model.

Commit where the problem happens

399720dac2543fb4cdbe1022ec1a01f2411b81e2

What platforms do you use to access the UI?

Windows

What browsers do you use to access the UI?

Microsoft Edge

Command Line Arguments

@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set TMP=X:\AI\TEMP
set TEMP=X:\AI\TEMP
set SAFETENSORS_FAST_GPU=1
REM set PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.6,max_split_size_mb:24
set COMMANDLINE_ARGS=--xformers --api --deepdanbooru

List of extensions

- ABG_extension | https://github.com/KutsuyaYuki/ABG_extension.git
- DiffusionDefender | https://github.com/WildBanjos/DiffusionDefender.git
- DreamArtist-sd-webui-extension | https://github.com/7eu7d7/DreamArtist-sd-webui-extension.git
- Hypernetwork-MonkeyPatch-Extension | https://github.com/aria1th/Hypernetwork-MonkeyPatch-Extension
- PromptGallery-stable-diffusion-webui | https://github.com/dr413677671/PromptGallery-stable-diffusion-webui.git
- SD-latent-mirroring | https://github.com/dfaker/SD-latent-mirroring
- StylePile | https://github.com/some9000/StylePile
- Umi-AI | https://github.com/Klokinator/Umi-AI
- a1111-sd-webui-haku-img | https://github.com/KohakuBlueleaf/a1111-sd-webui-haku-img.git
- a1111-sd-webui-tagcomplete | https://github.com/DominikDoom/a1111-sd-webui-tagcomplete
- ~~asymmetric-tiling-sd-webui | https://github.com/tjm35/asymmetric-tiling-sd-webui.git~~
- booru2prompt | https://github.com/Malisius/booru2prompt.git
- custom-diffusion-webui | https://github.com/guaneec/custom-diffusion-webui.git
- ddetailer | https://github.com/dustysys/ddetailer.git
- embedding-inspector | https://github.com/tkalayci71/embedding-inspector.git
- model-keyword | https://github.com/mix1009/model-keyword
- multi-subject-render | https://github.com/Extraltodeus/multi-subject-render.git
- novelai-2-local-prompt | https://github.com/animerl/novelai-2-local-prompt
- prompt-fusion-extension | https://github.com/ljleb/prompt-fusion-extension.git
- sd-dynamic-prompts | https://github.com/adieyal/sd-dynamic-prompts
- ~~sd-extension-steps-animation | https://github.com/vladmandic/sd-extension-steps-animation~~
- sd-extension-system-info | https://github.com/vladmandic/sd-extension-system-info
- sd-infinity-grid-generator-script | https://github.com/mcmonkeyprojects/sd-infinity-grid-generator-script.git
- sd-webui-additional-networks | https://github.com/kohya-ss/sd-webui-additional-networks.git
- sd-webui-gelbooru-prompt | https://github.com/antis0007/sd-webui-gelbooru-prompt.git
- sd-webui-model-converter | https://github.com/Akegarasu/sd-webui-model-converter
- sd-webui-multiple-hypernetworks | https://github.com/antis0007/sd-webui-multiple-hypernetworks.git
- sd_dreambooth_extension | https://github.com/d8ahazard/sd_dreambooth_extension
- sd_save_intermediate_images | https://github.com/AlUlkesh/sd_save_intermediate_images
- sdweb-merge-block-weighted-gui | https://github.com/bbc-mc/sdweb-merge-block-weighted-gui
- sdweb-merge-board | https://github.com/bbc-mc/sdweb-merge-board.git
- seed_travel | https://github.com/yownas/seed_travel.git
- shift-attention | https://github.com/yownas/shift-attention.git
- stable-diffusion-webui-Prompt_Generator | https://github.com/imrayya/stable-diffusion-webui-Prompt_Generator
- stable-diffusion-webui-aesthetic-gradients | https://github.com/AUTOMATIC1111/stable-diffusion-webui-aesthetic-gradients
- stable-diffusion-webui-aesthetic-image-scorer | https://github.com/tsngo/stable-diffusion-webui-aesthetic-image-scorer
- stable-diffusion-webui-artists-to-study | https://github.com/camenduru/stable-diffusion-webui-artists-to-study
- stable-diffusion-webui-cafe-aesthetic | https://github.com/p1atdev/stable-diffusion-webui-cafe-aesthetic.git
- stable-diffusion-webui-conditioning-highres-fix | https://github.com/klimaleksus/stable-diffusion-webui-conditioning-highres-fix.git
- stable-diffusion-webui-daam | https://github.com/kousw/stable-diffusion-webui-daam.git
- stable-diffusion-webui-dataset-tag-editor | https://github.com/toshiaki1729/stable-diffusion-webui-dataset-tag-editor
- stable-diffusion-webui-embedding-editor | https://github.com/CodeExplode/stable-diffusion-webui-embedding-editor.git
- stable-diffusion-webui-images-browser | https://github.com/yfszzx/stable-diffusion-webui-images-browser
- stable-diffusion-webui-inspiration | https://github.com/yfszzx/stable-diffusion-webui-inspiration
- stable-diffusion-webui-instruct-pix2pix | https://github.com/Klace/stable-diffusion-webui-instruct-pix2pix.git
- stable-diffusion-webui-pixelization | https://github.com/AUTOMATIC1111/stable-diffusion-webui-pixelization.git
- stable-diffusion-webui-prompt-travel | https://github.com/Kahsolt/stable-diffusion-webui-prompt-travel.git
- stable-diffusion-webui-promptgen | https://github.com/AUTOMATIC1111/stable-diffusion-webui-promptgen
- stable-diffusion-webui-randomize | https://github.com/innightwolfsleep/stable-diffusion-webui-randomize
- stable-diffusion-webui-sonar | https://github.com/Kahsolt/stable-diffusion-webui-sonar
- stable-diffusion-webui-tokenizer | https://github.com/AUTOMATIC1111/stable-diffusion-webui-tokenizer.git
- stable-diffusion-webui-visualize-cross-attention-extension | https://github.com/benkyoujouzu/stable-diffusion-webui-visualize-cross-attention-extension.git
- stable-diffusion-webui-wd14-tagger | https://github.com/toriato/stable-diffusion-webui-wd14-tagger.git
- ~~stable-diffusion-webui-wildcards | https://github.com/AUTOMATIC1111/stable-diffusion-webui-wildcards~~
- training-picker | https://github.com/Maurdekye/training-picker
- ultimate-upscale-for-automatic1111 | https://github.com/Coyote-A/ultimate-upscale-for-automatic1111.git
- ~~unprompted | https://github.com/ThereforeGames/unprompted~~
- LDSR | built-in
- Lora | built-in
- ScuNET | built-in
- SwinIR | built-in
- prompt-bracket-checker | built-in
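The other failure mode in the console logs below, `RuntimeError: Input type (torch.cuda.HalfTensor) and weight type (torch.HalfTensor) should be the same`, points the same way: a forward pass ran against a model whose "move model to device" step had not finished, so the input tensor was on the GPU while the conv weights were still on the CPU. A plain-Python sketch of the mismatch (all names here are illustrative stand-ins, not torch or webui code):

```python
# Illustrative model of the device-mismatch error: a conv layer refuses to run
# when its weights and its input live on different devices, and succeeds once
# the weights have been moved to match the input.

class FakeTensor:
    def __init__(self, device):
        self.device = device

class FakeConv:
    def __init__(self, device="cpu"):
        self.weight = FakeTensor(device)

    def to(self, device):
        self.weight = FakeTensor(device)   # move weights, like Module.to()
        return self

    def forward(self, x):
        if x.device != self.weight.device:
            raise RuntimeError(
                f"Input type ({x.device}) and weight type "
                f"({self.weight.device}) should be the same"
            )
        return "ok"

conv = FakeConv(device="cpu")          # weights still on the CPU mid-load
x = FakeTensor(device="cuda")          # input already on the GPU
try:
    conv.forward(x)
except RuntimeError as e:
    mismatch_error = str(e)            # reproduces the error shape from the log

result = conv.to("cuda").forward(x)    # once the move completes, forward succeeds
```

This is consistent with the log ordering: the error fires between "LatentDiffusion: Running in eps-prediction mode" and the later "move model to device" timing line, i.e. mid-load.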

Console logs

PS X:\AI\stable-diffusion-webui> .\webui-user.bat
venv "X:\AI\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec  6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Commit hash: 399720dac2543fb4cdbe1022ec1a01f2411b81e2
Installing requirements for Web UI
Installing requirements for Anime Background Remover
Installing requirements for Anime Background Remover
Installing requirements for Anime Background Remover

Installing requirements for scikit_learn

Installing requirements for Prompt Gallery

Installing sd-dynamic-prompts requirements.txt

#######################################################################################################
Initializing Dreambooth
If submitting an issue on github, please provide the below text for debugging purposes:

Python revision: 3.10.9 (tags/v3.10.9:1dd9be6, Dec  6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Dreambooth revision: 9f4d931a319056c537d24669cb950d146d1537b0
SD-WebUI revision: 399720dac2543fb4cdbe1022ec1a01f2411b81e2

Checking Dreambooth requirements...
[+] bitsandbytes version 0.35.0 installed.
[+] diffusers version 0.10.2 installed.
[+] transformers version 4.25.1 installed.
[+] xformers version 0.0.16rc425 installed.
[+] torch version 1.13.1+cu117 installed.
[+] torchvision version 0.14.1+cu117 installed.

#######################################################################################################

Installing requirements for dataset-tag-editor [onnxruntime-gpu]

Launching Web UI with arguments: --xformers --api --deepdanbooru
Loading booru2prompt settings
[AddNet] Updating model hashes...
0it [00:00, ?it/s]
Hypernetwork-MonkeyPatch-Extension found!
SD-Webui API layer loaded
Installing pywin32
Error loading script: training_picker.py
Traceback (most recent call last):
  File "X:\AI\stable-diffusion-webui\modules\scripts.py", line 229, in load_scripts
    script_module = script_loading.load_module(scriptfile.path)
  File "X:\AI\stable-diffusion-webui\modules\script_loading.py", line 11, in load_module
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "X:\AI\stable-diffusion-webui\extensions\training-picker\scripts\training_picker.py", line 16, in <module>
    from modules.ui import create_refresh_button, folder_symbol
ImportError: cannot import name 'folder_symbol' from 'modules.ui' (X:\AI\stable-diffusion-webui\modules\ui.py)

Loading weights [57b47348c5] from X:\AI\stable-diffusion-webui\models\Stable-diffusion\wtf4\wtf4_10000.ckpt
Creating model from config: X:\AI\stable-diffusion-webui\models\Stable-diffusion\wtf4\wtf4_10000.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Loading VAE weights specified in settings: X:\AI\stable-diffusion-webui\models\VAE\vae-ft-mse-840000-ema-pruned.safetensors
Applying xformers cross attention optimization.
Textual inversion embeddings loaded(xxx): 1man, 2000ccplus, 3N1DS1NCL41R , 80s-anime-ai-being, 80s-anime-ai, 80s-car, albino_style, andava, ao_style-7500, ao_style, art by Smoose2, B4R0N, B4R0N22, bad-artist-anime, bad-artist...
Textual inversion embeddings skipped(xxx): AnalogFilm768-BW-Classic, AnalogFilm768-BW-Modern, AnalogFilm768-BW-Tintype, AnalogFilm768-BW-Vintage, AnalogFilm768-Old-School, AnalogFilm768, Apoc768, Art by Smoose-22, art by Smoose22, Cinema768-Analog, Cinema768-BW, Cinema768-Classic, Cinema768-Digital, Cinema768-SilentFilm, classipeint, DaveSpaceFour, DaveSpaceOne, dblx768, DrD_PNTE768, EMB-SD21_Black_Marble_Style_V5-2000...
Model loaded in 6.4s (load weights from disk: 1.0s, create model: 0.6s, apply weights to model: 0.5s, apply half(): 0.7s, load VAE: 0.1s, move model to device: 1.2s, load textual inversion embeddings: 2.3s).
patched in extra network ui page: deltas
patched in extra network: deltas
Textual inversion embeddings loaded(0):
INFO:     Started server process [34800]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://localhost:5173 (Press CTRL+C to quit)
INFO:     ::1:64537 - "GET / HTTP/1.1" 200 OK
add tab
Running on local URL:  http://127.0.0.1:7860

Loading weights [57b47348c5] from X:\AI\stable-diffusion-webui\models\Stable-diffusion\wtf4\wtf4_10000.ckpt
Creating model from config: X:\AI\stable-diffusion-webui\models\Stable-diffusion\wtf4\wtf4_10000.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Loading VAE weights specified in settings: X:\AI\stable-diffusion-webui\models\VAE\Anything-V3.0.vae.pt
Applying xformers cross attention optimization.
Model loaded in 4.6s (create model: 0.6s, apply weights to model: 0.4s, apply half(): 0.7s, load VAE: 0.6s, move model to device: 1.2s, load textual inversion embeddings: 1.0s).
Prompt generated in 0.0 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:07<00:00,  2.68it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:13<00:00,  1.49it/s]
Loading weights [5b2986ae4b] from X:\AI\stable-diffusion-webui\models\Stable-diffusion\wtf4\wtf4_12000.ckpt
Creating model from config: X:\AI\stable-diffusion-webui\models\Stable-diffusion\wtf4\wtf4_12000.yaml
LatentDiffusion: Running in eps-prediction mode
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\anyio\streams\memory.py", line 94, in receive
    return self.receive_nowait()
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\anyio\streams\memory.py", line 89, in receive_nowait
    raise WouldBlock
anyio.WouldBlock

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 77, in call_next
    message = await recv_stream.receive()
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\anyio\streams\memory.py", line 114, in receive
    raise EndOfStream
anyio.EndOfStream

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 407, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 78, in __call__
    return await self.app(scope, receive, send)
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\fastapi\applications.py", line 270, in __call__
    await super().__call__(scope, receive, send)
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\applications.py", line 124, in __call__
    await self.middleware_stack(scope, receive, send)
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\errors.py", line 184, in __call__
    raise exc
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\errors.py", line 162, in __call__
    await self.app(scope, receive, _send)
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 106, in __call__
    response = await self.dispatch_func(request, call_next)
  File "X:\AI\stable-diffusion-webui\modules\api\api.py", line 96, in log_and_time
    res: Response = await call_next(req)
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 80, in call_next
    raise app_exc
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 69, in coro
    await self.app(scope, receive_or_disconnect, send_no_error)
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\gzip.py", line 24, in __call__
    await responder(scope, receive, send)
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\gzip.py", line 43, in __call__
    await self.app(scope, receive, self.send_with_gzip)
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\exceptions.py", line 79, in __call__
    raise exc
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 21, in __call__
    raise e
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 18, in __call__
    await self.app(scope, receive, send)
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 706, in __call__
    await route.handle(scope, receive, send)
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 276, in handle
    await self.app(scope, receive, send)
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 66, in app
    response = await func(request)
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\fastapi\routing.py", line 235, in app
    raw_response = await run_endpoint_function(
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\fastapi\routing.py", line 163, in run_endpoint_function
    return await run_in_threadpool(dependant.call, **values)
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\concurrency.py", line 41, in run_in_threadpool
    return await anyio.to_thread.run_sync(func, *args)
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "X:\AI\stable-diffusion-webui\modules\progress.py", line 85, in progressapi
    shared.state.set_current_image()
  File "X:\AI\stable-diffusion-webui\modules\shared.py", line 241, in set_current_image
    self.do_set_current_image()
  File "X:\AI\stable-diffusion-webui\modules\shared.py", line 249, in do_set_current_image
    self.assign_current_image(modules.sd_samplers.samples_to_image_grid(self.current_latent))
  File "X:\AI\stable-diffusion-webui\modules\sd_samplers.py", line 135, in samples_to_image_grid
    return images.image_grid([single_sample_to_image(sample, approximation) for sample in samples])
  File "X:\AI\stable-diffusion-webui\modules\sd_samplers.py", line 135, in <listcomp>
    return images.image_grid([single_sample_to_image(sample, approximation) for sample in samples])
  File "X:\AI\stable-diffusion-webui\modules\sd_samplers.py", line 122, in single_sample_to_image
    x_sample = processing.decode_first_stage(shared.sd_model, sample.unsqueeze(0))[0]
  File "X:\AI\stable-diffusion-webui\modules\processing.py", line 422, in decode_first_stage
    x = model.decode_first_stage(x)
  File "X:\AI\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "X:\AI\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "X:\AI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 826, in decode_first_stage
    return self.first_stage_model.decode(z)
  File "X:\AI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\autoencoder.py", line 89, in decode
    z = self.post_quant_conv(z)
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "X:\AI\stable-diffusion-webui\extensions-builtin\Lora\lora.py", line 182, in lora_Conv2d_forward
    return lora_forward(self, input, torch.nn.Conv2d_forward_before_lora(self, input))
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 463, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 459, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Input type (torch.cuda.HalfTensor) and weight type (torch.HalfTensor) should be the same
DiffusionWrapper has 859.52 M params.
Loading VAE weights specified in settings: X:\AI\stable-diffusion-webui\models\VAE\Anything-V3.0.vae.pt
Applying xformers cross attention optimization.
Model loaded in 4.5s (create model: 0.6s, apply weights to model: 0.5s, apply half(): 0.7s, load VAE: 0.6s, move model to device: 1.0s, load textual inversion embeddings: 1.1s).
Total progress: 100%|██████████████████████████████████████████████████████████████████| 40/40 [00:28<00:00,  1.38it/s]
Calculating sha256 for X:\AI\stable-diffusion-webui\models\Stable-diffusion\model_114950_based_on_ratnikamix-v2.ckpt: 4852cca1015dba0919bf75bb7a36bed2887ab9acaa2fd72e7b371eb908afe81b
Loading weights [4852cca101] from X:\AI\stable-diffusion-webui\models\Stable-diffusion\model_114950_based_on_ratnikamix-v2.ckpt
Creating model from config: X:\AI\stable-diffusion-webui\models\Stable-diffusion\model_114950_based_on_ratnikamix-v2.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Loading VAE weights specified in settings: X:\AI\stable-diffusion-webui\models\VAE\Anything-V3.0.vae.pt
Applying xformers cross attention optimization.
Model loaded in 4.7s (create model: 0.6s, apply weights to model: 0.4s, apply half(): 0.8s, load VAE: 0.5s, move model to device: 1.2s, load textual inversion embeddings: 1.0s).
Loading weights [57b47348c5] from X:\AI\stable-diffusion-webui\models\Stable-diffusion\wtf4\wtf4_10000.ckpt
Creating model from config: X:\AI\stable-diffusion-webui\models\Stable-diffusion\wtf4\wtf4_10000.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Loading VAE weights specified in settings: X:\AI\stable-diffusion-webui\models\VAE\Anything-V3.0.vae.pt
Applying xformers cross attention optimization.
Model loaded in 4.4s (create model: 0.5s, apply weights to model: 0.4s, apply half(): 0.7s, load VAE: 0.6s, move model to device: 1.1s, load textual inversion embeddings: 1.0s).
Prompt generated in 0.0 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:07<00:00,  2.55it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:13<00:00,  1.48it/s]
Checkpoint model_114950_based_on_ratnikamix-v2.ckpt [4852cca101] not found; loading fallback ((f222+111)0.5)+(mdiffusionv2)0.35.safetensors [27cc90594f]
Loading weights [27cc90594f] from X:\AI\stable-diffusion-webui\models\Stable-diffusion\((f222+111)0.5)+(mdiffusionv2)0.35.safetensors
Creating model from config: X:\AI\stable-diffusion-webui\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\anyio\streams\memory.py", line 94, in receive
    return self.receive_nowait()
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\anyio\streams\memory.py", line 89, in receive_nowait
    raise WouldBlock
anyio.WouldBlock

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 77, in call_next
    message = await recv_stream.receive()
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\anyio\streams\memory.py", line 114, in receive
    raise EndOfStream
anyio.EndOfStream

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 407, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 78, in __call__
    return await self.app(scope, receive, send)
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\fastapi\applications.py", line 270, in __call__
    await super().__call__(scope, receive, send)
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\applications.py", line 124, in __call__
    await self.middleware_stack(scope, receive, send)
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\errors.py", line 184, in __call__
    raise exc
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\errors.py", line 162, in __call__
    await self.app(scope, receive, _send)
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 106, in __call__
    response = await self.dispatch_func(request, call_next)
  File "X:\AI\stable-diffusion-webui\modules\api\api.py", line 96, in log_and_time
    res: Response = await call_next(req)
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 80, in call_next
    raise app_exc
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 69, in coro
    await self.app(scope, receive_or_disconnect, send_no_error)
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\gzip.py", line 24, in __call__
    await responder(scope, receive, send)
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\gzip.py", line 43, in __call__
    await self.app(scope, receive, self.send_with_gzip)
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\exceptions.py", line 79, in __call__
    raise exc
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 21, in __call__
    raise e
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 18, in __call__
    await self.app(scope, receive, send)
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 706, in __call__
    await route.handle(scope, receive, send)
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 276, in handle
    await self.app(scope, receive, send)
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 66, in app
    response = await func(request)
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\fastapi\routing.py", line 235, in app
    raw_response = await run_endpoint_function(
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\fastapi\routing.py", line 163, in run_endpoint_function
    return await run_in_threadpool(dependant.call, **values)
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\concurrency.py", line 41, in run_in_threadpool
    return await anyio.to_thread.run_sync(func, *args)
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "X:\AI\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "X:\AI\stable-diffusion-webui\modules\progress.py", line 85, in progressapi
    shared.state.set_current_image()
  File "X:\AI\stable-diffusion-webui\modules\shared.py", line 241, in set_current_image
    self.do_set_current_image()
  File "X:\AI\stable-diffusion-webui\modules\shared.py", line 249, in do_set_current_image
    self.assign_current_image(modules.sd_samplers.samples_to_image_grid(self.current_latent))
  File "X:\AI\stable-diffusion-webui\modules\sd_samplers.py", line 135, in samples_to_image_grid
    return images.image_grid([single_sample_to_image(sample, approximation) for sample in samples])
  File "X:\AI\stable-diffusion-webui\modules\sd_samplers.py", line 135, in <listcomp>
    return images.image_grid([single_sample_to_image(sample, approximation) for sample in samples])
  File "X:\AI\stable-diffusion-webui\modules\sd_samplers.py", line 122, in single_sample_to_image
    x_sample = processing.decode_first_stage(shared.sd_model, sample.unsqueeze(0))[0]
  File "X:\AI\stable-diffusion-webui\modules\processing.py", line 422, in decode_first_stage
    x = model.decode_first_stage(x)
AttributeError: 'NoneType' object has no attribute 'decode_first_stage'
Loading VAE weights specified in settings: X:\AI\stable-diffusion-webui\models\VAE\Anything-V3.0.vae.pt
Applying xformers cross attention optimization.
Model loaded in 16.9s (create model: 0.5s, apply weights to model: 12.6s, apply half(): 0.8s, load VAE: 0.7s, move model to device: 1.2s, load textual inversion embeddings: 1.0s).
Total progress: 100%|██████████████████████████████████████████████████████████████████| 40/40 [00:41<00:00,  1.05s/it]
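The `AttributeError: 'NoneType' object has no attribute 'decode_first_stage'` at the bottom of the trace suggests the progress endpoint tried to render a live preview while `shared.sd_model` was temporarily `None` during the checkpoint swap. A minimal sketch of that race and a defensive guard (all names hypothetical, simplified from the webui's actual modules):

```python
# Sketch of the race behind the AttributeError above (hypothetical names):
# while a checkpoint reload is in progress the shared model slot is empty,
# but the /progress endpoint still tries to decode a preview frame from it.

class SharedState:
    def __init__(self):
        self.sd_model = None  # cleared while a new checkpoint loads


def preview_frame(state, latent):
    """Defensive variant: skip the preview frame instead of crashing."""
    if state.sd_model is None:
        return None  # no model loaded yet; drop this preview frame
    return state.sd_model.decode_first_stage(latent)


state = SharedState()
print(preview_frame(state, latent="dummy"))  # no AttributeError is raised
```

The unguarded equivalent, `state.sd_model.decode_first_stage(latent)`, fails exactly as in the log whenever the model reference is momentarily cleared.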

Additional information

This issue of WebUI intermittently falling back to the first model alphabetically has existed since the change in how hashes are calculated, I think.

I should mention that I was using the "Use old karras scheduler sigmas (0.1 to 10)" option at the moment of that error wall. The model fallback issue happens intermittently regardless of that option's state.

mart-hill commented 1 year ago

I still have some v1 .yaml files next to some (SD 1.5) models (including ones I trained in Dreambooth). Could that be messing with WebUI's new approach to .yaml config files?

Edit: I'm unable to choose any model other than the one I started WebUI with - the UI loads it back (and generates the image with it) every time I press "Generate". Of course, with the error wall.🙂

2nd edit: Even after a full WebUI restart, I'm unable to generate an image with any model other than the one WebUI has firmly decided to "attach itself" to. It would be this _wtf410000.ckpt model. I did use the "Paste" button just after the full UI restart, though. Curiously, the model list selector stays on the model I'd like to use - the list didn't switch to the wtf4 model file. On commit c81b52ffbd6252842b3473a7aa8eb7ffc88ee7d1 it's impossible for me to choose another model after pressing Generate (at least after using "Paste" from the previous session) - the UI immediately loads the wtf4 model, even if the previous session ended with another model chosen (and that choice saved).

Funnily enough, after generating the image on the stuck model, the UI reloads the model I'd chosen as the next one, like it's nothing - what's going on? 🙂 I'll try this option next: image

Didn't help. Full restart of the UI and another test. 🙂 I think the issue is with "Paste" reading from the params.txt file - that file still holds the hash of the "stuck" model, and the UI refuses to yield. Also, the model I'm fighting with (the 12k-steps successor to the wtf4 10k one) now shows "A tensor with all NaNs was produced in VAE.", which probably means something broke; weird, since both models could generate images before. Still, even choosing a correct model triggers this "I don't wanna change your model!" behavior. Or maybe the NovelAI VAE just fails sometimes and needs the --no-half-vae parameter? A WebUI restart did help - the "stuck model" state won't recur unless I use "Paste" from the previous session, I think.

f-rank commented 1 year ago

Also getting this. It was loading another model when I pressed Generate, then loading the previous model when generation ended. It also stopped loading the correct model from metadata in images dragged onto the prompt.

Having "When reading generation parameters from text into UI (from PNG info or pasted text), do not change the selected model/checkpoint." on or off doesn't matter. It's got a mind of its own.

Had to git checkout aa6e55e00140da6d73d3d360a5628c1b1316550d, and it all works there.

busyfree commented 1 year ago

I have the same problem.

mudashi33 commented 1 year ago
  1. Manually switch models
  2. Use the "Paste"button
  3. Turn off the "Override Settings" parameter below

Snipaste_2023-01-30_15-31-49 Snipaste_2023-01-30_15-32-38

f-rank commented 1 year ago
  1. Manually switch models
  2. Use the "Paste" button
  3. Turn off the "Override Settings" parameter below

That doesn't fix it. Were you having the same/similar problem, and did it fix it for you?

mudashi33 commented 1 year ago

@f-rank I had the same problem and solved it. You can restart the webui and try the method again.

Running on local URL: http://127.0.0.1:7860
To create a public link, set share=True in launch().
Checkpoint 2836f3a5 not found; loading fallback AbyssOrangeMix2_hard.safetensors [0fc198c490]
Loading weights [0fc198c490] from E:\Stable Diffusion\NovelAI StableDiffusion WebUI\models\Stable-diffusion\AbyssOrangeMix2_hard.safetensors
Applying xformers cross attention optimization.
Weights loaded in 1.4s (apply weights to model: 0.8s, move model to device: 0.5s).
100%| 30/30 [00:08<00:00, 3.45it/s]   (sampling progress bars trimmed)
Loading CLiP model ViT-L/14
Tile 1/12 ... Tile 12/12
100%| 30/30 [01:03<00:00, 2.12s/it]   (upscale progress bars trimmed)
Loading weights [13dfc9921f] from E:\Stable Diffusion\NovelAI StableDiffusion WebUI\models\Stable-diffusion\dreamshaper_332BakedVaeClipFix.safetensors
ERROR: Exception in ASGI application
Traceback (most recent call last):
  ... (uvicorn/starlette/fastapi middleware frames trimmed; same stack as the traceback earlier in this issue)
  File "E:\Stable Diffusion\NovelAI StableDiffusion WebUI\modules\progress.py", line 85, in progressapi
    shared.state.set_current_image()
  File "E:\Stable Diffusion\NovelAI StableDiffusion WebUI\modules\shared.py", line 249, in do_set_current_image
    self.assign_current_image(modules.sd_samplers.samples_to_image_grid(self.current_latent))
  File "E:\Stable Diffusion\NovelAI StableDiffusion WebUI\modules\sd_samplers_common.py", line 38, in single_sample_to_image
    x_sample = processing.decode_first_stage(shared.sd_model, sample.unsqueeze(0))[0]
  File "E:\Stable Diffusion\NovelAI StableDiffusion WebUI\extensions-builtin\Lora\lora.py", line 182, in lora_Conv2d_forward
    return lora_forward(self, input, torch.nn.Conv2d_forward_before_lora(self, input))
  File "E:\Stable Diffusion\NovelAI StableDiffusion WebUI\python\lib\site-packages\torch\nn\modules\conv.py", line 453, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Input type (torch.cuda.HalfTensor) and weight type (torch.HalfTensor) should be the same
Applying xformers cross attention optimization.
Weights loaded in 1.7s (apply weights to model: 1.1s, move model to device: 0.5s).
Total progress: 100%| 60/60 [01:54<00:00, 1.92s/it]

f-rank commented 1 year ago

@mudashi33

Well, whenever I drag a file to the prompt area and it presents the generation details, it isn't loading the proper ckpt/safetensor, and none of what I read in your response seems to fix it. Do you mean I have to load the ckpt/safetensor manually every time? That's the thing I want to avoid; it should do it automatically. Either that or I am missing something obvious in your response.

Does "Turn off the "Override Settings" parameter below" mean pressing the x on it? Because that just made the info disappear, and the next image dragged also didn't load the correct weight file.

This seems kind of backwards, actually. Is it possible to just have the weight file loaded directly and not have this model swap? What even is this, and why would it be needed? Isn't it just adding time to generation?

mart-hill commented 1 year ago

This "Paste" from params.txt file (by using this button)

image

bug still exists in commit 2c1bb46. The workaround is not to use it for now; otherwise WebUI will reload the model recorded in the params.txt file every time the user chooses another one from the drop-down list and presses "Generate".

VictorJulianiR commented 1 year ago

Same behavior here, hope it gets fixed.

mudashi33 commented 1 year ago

@f-rank In the current version, there is no way to load the model correctly when you drag a file to the prompt area and it shows the generation details, so you need to manually load the correct model and click the x in front of "model hash" under "Override Settings" so there is no error.

This is a bug in the "paste parameter" button that needs to be fixed by the official.

I've always switched models manually, and I don't change models very often unless I need to test one, because loading is too slow.

Snipaste_2023-01-31_08-46-02

f-rank commented 1 year ago

@mudashi33 Damn shame, because it's integral to the way I interact with it. I'll just sit tight on the last working commit, without the bug, until it's resolved.

imacopypaster commented 1 year ago

@mudashi33 Will just sit tight on the last working commit

Can you tell me which commit exactly?

busyfree commented 1 year ago

Updated to the latest commit (commit id 2c1bb46c7ad5b4536f6587d327a03f0ff7811c5d) and opened two tabs accessing the server: the model change made by the latest user overwrites the model selected by the earlier user.
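For context, this two-tab behavior follows from the selected checkpoint being a single piece of server-side state shared by every browser session. A toy sketch of the effect (hypothetical names, not the actual webui code):

```python
# The selected checkpoint lives in one server-side global, not per session,
# so whichever tab set it last wins for every subsequent generation.

current_checkpoint = "model_a.ckpt"  # one global shared by all connected tabs


def select_checkpoint(name):
    """Called when any tab picks a model from the drop-down."""
    global current_checkpoint
    current_checkpoint = name


def generate(tab):
    """Every generation reads the same global, regardless of which tab asks."""
    return f"{tab} generated with {current_checkpoint}"


select_checkpoint("model_a.ckpt")  # tab 1 picks model A
select_checkpoint("model_b.ckpt")  # tab 2 then picks model B
print(generate("tab 1"))           # tab 1 now generates with model_b.ckpt
```

A per-session fix would have to key the selection by session rather than store it globally, which is a larger change than the hash-related fixes discussed here.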

f-rank commented 1 year ago

@imacopypaster

I reverted to aa6e55e00140da6d73d3d360a5628c1b1316550d , the commit I had before updating to another and having all the ckpt unpleasantness.

medledan commented 1 year ago

I am also having this weird issue. Sometimes the model looks like it loads, but I get a weird image generation - like, what model is this? I load up the cmd prompt and see the model failed to load and reverted to another one. Also, the model name no longer shows in the field at the top left; it's blank half the time.

Dravoss commented 1 year ago

I have the same problem. In my case it's always the same: it looks like it loads the new model, then when I try to generate an image it loads the previous model and generates the image with it. Sometimes closing the console and loading WebUI again lets me load the new model, but it randomly fails again when swapping models.

medledan commented 1 year ago

This just happened again. I did an XYZ plot on the loaded checkpoint - worked just fine. Then I made an adjustment using the exact same loaded checkpoint... and it "lost it" and reverted to another one...

Checkpoint hentia\Fruity Mix.ckpt [ce989b9bf6] not found; loading fallback 2d anime\Anything-V3.0-pruned-fp32.ckpt [67a115286b]
Loading weights [67a115286b] from E:\stable-diffusion-webui-current\models\Stable-diffusion\2d anime\Anything-V3.0-pruned-fp32.ckpt
Loading VAE weights specified in settings: E:\stable-diffusion-webui-current\models\VAE\vae-ft-mse-840000-ema-pruned.ckpt
Applying xformers cross attention optimization.
Weights loaded in 29.6s (load weights from disk: 28.7s, apply weights to model: 0.2s, load VAE: 0.2s, move model to device: 0.5s).
Total progress: 300it [03:36, 1.39it/s]
X/Y/Z plot will create 9 images on 1 9x1 grid. (Total steps to process: 540)

medledan commented 1 year ago

I have the same problem. In my case it's always the same: it looks like it loads the new model, then when I try to generate an image it loads the previous model and generates the image with it. Sometimes closing the console and loading WebUI again lets me load the new model, but it randomly fails again when swapping models.

The only way I know it happened is when I get a completely new style.

Dravoss commented 1 year ago

This "Paste" from params.txt file (by using this button) image bug still exists in 2c1bb46 commit. Workaround is not to use it for now, otherwise WebUI will reload the model present in params.txt file every time user chooses another one from the drop-down list, and pressed "Generate".

It didn't work; I avoided using that button but still got the bug. To add more info: the bug changes the model when I try to generate an image, and if I interrupt the generation it pretends to go back to the chosen model (without generating the image, of course), then fails if I try again.

giteeeeee commented 1 year ago

I think I found the cause:

  1. Load a model without a hash, i.e. a model that hasn't been loaded before
  2. Stable Diffusion calculates the hash
  3. Load any other model
  4. Load the first model again. The model is now "not found", and WebUI falls back to whichever checkpoint comes first alphabetically
  5. Restart webui; the model will then load

New Project

Basically: any model without a hash will load; any model whose hash was freshly calculated in the current webui instance won't load; any model whose hash was calculated in a previous webui instance will load.
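These steps are consistent with a stale-title lookup: the checkpoint registry is keyed by a title that changes once the hash is computed, so a lookup using the new "name [hash]" string misses and the code silently falls back. A toy reproduction of that suspected mechanism (hypothetical and simplified; not the actual modules/sd_models.py logic):

```python
# Toy model of the suspected bug: a checkpoint is registered under its
# bare filename before its hash exists; after hashing, the UI refers to
# it as "name [hash]", which is no longer a key in the registry.

registry = {}  # checkpoint title -> file path


def register(title, path):
    registry[title] = path


def find_checkpoint(title, fallback_title):
    """Return the path for a title, falling back like the webui log shows."""
    if title in registry:
        return registry[title]
    print(f"Checkpoint {title} not found; loading fallback {fallback_title}")
    return registry[fallback_title]


register("model_a.ckpt", r"X:\models\model_a.ckpt")    # hash not computed yet
hashed_title = "model_a.ckpt [4852cca101]"             # title after hashing
print(find_checkpoint(hashed_title, "model_a.ckpt"))   # misses -> fallback
```

If the registry were refreshed (or re-keyed) after the hash is computed, the hashed title would resolve and no fallback would fire, which matches the observation that restarting webui (which re-registers everything with hashes already on disk) makes the model load.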

mezotaken commented 1 year ago

3e0f9a75438fa815429b5530261bcf7d80f3f101 Check if it's working as intended after the latest commit.

medledan commented 1 year ago

3e0f9a7 Check if it's working as intended after the latest commit.

Nope, it's been happening all the time. Also, if I use an XYZ plot and interrupt it because I messed something up and want to cancel, it loses the model it just had loaded and reverts back to the "fallback", saying the model is not found.

edit: Just realized I am NOT on the latest commit. Let me go test it.

Dravoss commented 1 year ago

3e0f9a7 Check if it's working as intended after the latest commit.

I had been using WebUI commit 40e51fd for hours with zero issues. I just updated to 3e0f9a7, and the first model swap failed again. I'm going to revert to check whether it was a coincidence.

Dravoss commented 1 year ago

After testing a bit more, I think giteeeeee was right in https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/7384#issuecomment-1414895103 - the bug happens when a new model has no hash, and, after rebooting, when a swap fails and you reload the previous generation with the blue check button.

busyfree commented 1 year ago

Tested on the latest commit; I still have the problem. When two session pages are open, a model change in the new session page overwrites the model selected in the earlier session page. See the screenshot below.

image

Updated to the latest commit (commit id ea9bd9fc) and opened two tabs accessing the server: the latest user's model change overwrites the model selected by the earlier user.

djdookie commented 1 year ago

I also have the problem: after I read generation parameters from an image, I can't switch models anymore. After hitting Generate, the same model that was used to generate the image is always used.

djdookie commented 1 year ago

I just found out that you can prevent the bug and enable model switching again if you check the following option in Settings -> User interface: "When reading generation parameters from text into UI (from PNG info or pasted text), do not change the selected model/checkpoint."

pwatx commented 1 year ago

I applied this fix and the problem was resolved:

https://github.com/AUTOMATIC1111/stable-diffusion-webui/commit/3e0f9a75438fa815429b5530261bcf7d80f3f101

djdookie commented 1 year ago

When I disable the following option, I still can't switch models and get a fallback to some model: "When reading generation parameters from text into UI (from PNG info or pasted text), do not change the selected model/checkpoint." (Settings -> User interface)

medledan commented 1 year ago

When I disable the following option, I still can't switch models and get a fallback to some model: "When reading generation parameters from text into UI (from PNG info or pasted text), do not change the selected model/checkpoint." (Settings -> User interface)

Have you updated your Automatic1111? This issue was fixed a month ago with a commit.

djdookie commented 1 year ago

Yes, current version

WellTung666 commented 1 year ago

Tested on the latest commit; I still have the problem. When two session pages are open, a model change in the new session page overwrites the model selected in the earlier session page. See the screenshot below.

image

Updated to the latest commit (commit id ea9bd9fc) and opened two tabs accessing the server: the latest user's model change overwrites the model selected by the earlier user.

I also have the same problem, did you solve it?