lllyasviel / stable-diffusion-webui-forge

GNU Affero General Public License v3.0

[Bug]: ControlNet - ERROR - Recognizing Control Model failed #170

Open Mracobes9 opened 9 months ago

Mracobes9 commented 9 months ago

What happened?

I tried to inpaint an image with controlnetT2IAdapter_t2iAdapterStyle, but I got an error: Recognizing Control Model failed: ~/stable-diffusion-webui-forge/models/ControlNet/controlnetT2IAdapter_t2iAdapterStyle.safetensors

Steps to reproduce the problem

1) Download controlnetT2IAdapter_t2iAdapterStyle.safetensors from Civitai
2) Place it in ~/stable-diffusion-webui-forge/models/ControlNet
3) Go to the Inpaint page of the webui
4) Choose an image and an inpaint mask
5) Set up the ControlNet unit: Control Type - T2I-Adapter, model - controlnetT2IAdapter_t2iAdapterStyle.safetensors
6) Click "Generate"

What should have happened?

Inpainting should complete without errors.

What browsers do you use to access the UI ?

Google Chrome

Sysinfo

sysinfo-2024-02-10-14-54.json

Console logs

Using TCMalloc: libtcmalloc_minimal.so.4
libtcmalloc_minimal.so.4 is not linked with libpthreadand will trigger undefined symbol: ptthread_Key_Create error
Using TCMalloc: libtcmalloc.so.4
libtcmalloc.so.4 is not linked with libpthreadand will trigger undefined symbol: ptthread_Key_Create error
Python 3.11.6 (main, Nov 14 2023, 09:36:21) [GCC 13.2.1 20230801]
Version: f0.0.12-latest-110-g15bb49e7
Commit hash: 15bb49e761e837c0a3463a736762d11941ea69f7
Legacy Preprocessor init warning: Unable to install insightface automatically. Please try run `pip install insightface` manually.
Launching Web UI with arguments: --update-all-extensions --listen --always-gpu
Total VRAM 4039 MB, total RAM 15880 MB
Trying to enable lowvram mode because your GPU seems to have 4GB or less. If you don't want this use: --always-normal-vram
Set vram state to: LOW_VRAM
Device: cuda:0 NVIDIA GeForce GTX 1050 Ti : native
VAE dtype: torch.float32
Using pytorch cross attention
ControlNet preprocessor location: /home/mracobes/stable_diffusion/stable-diffusion-webui-forge/models/ControlNetPreprocessor
Loading weights [35937afca8] from /home/mracobes/stable_diffusion/stable-diffusion-webui-forge/models/Stable-diffusion/lazymixRealAmateur_v40Inpainting.safetensors
2024-02-10 17:51:36,288 - ControlNet - INFO - ControlNet UI callback registered.
Running on local URL:  http://0.0.0.0:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 12.6s (prepare environment: 2.9s, import torch: 3.7s, import gradio: 1.2s, setup paths: 1.0s, other imports: 0.7s, setup gfpgan: 0.2s, load scripts: 1.7s, create ui: 0.6s, gradio launch: 0.4s).
model_type EPS
UNet ADM Dimension 0
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}
left over keys: dict_keys(['alphas_cumprod', 'alphas_cumprod_prev', 'betas', 'log_one_minus_alphas_cumprod', 'model_ema.decay', 'model_ema.num_updates', 'posterior_log_variance_clipped', 'posterior_mean_coef1', 'posterior_mean_coef2', 'posterior_variance', 'sqrt_alphas_cumprod', 'sqrt_one_minus_alphas_cumprod', 'sqrt_recip_alphas_cumprod', 'sqrt_recipm1_alphas_cumprod'])
To load target model SD1ClipModel
Begin to load 1 model
Model loaded in 6.4s (load weights from disk: 1.1s, forge load real models: 4.7s, load VAE: 0.1s, calculate empty prompt: 0.5s).
2024-02-10 17:56:27,926 - ControlNet - INFO - ControlNet Input Mode: InputMode.SIMPLE
2024-02-10 17:56:27,929 - ControlNet - INFO - Using preprocessor: t2ia_sketch_pidi
2024-02-10 17:56:27,929 - ControlNet - INFO - preprocessor resolution = 512
Automatic Memory Management: 128 Modules in 0.02 seconds.
2024-02-10 17:56:28,713 - ControlNet - ERROR - Recognizing Control Model failed: /home/mracobes/stable_diffusion/stable-diffusion-webui-forge/models/ControlNet/controlnetT2IAdapter_t2iAdapterStyle.safetensors
*** Error running process: /home/mracobes/stable_diffusion/stable-diffusion-webui-forge/extensions-builtin/sd_forge_controlnet/scripts/controlnet.py
Traceback (most recent call last):
File "/home/mracobes/stable_diffusion/stable-diffusion-webui-forge/modules/scripts.py", line 798, in process
script.process(p, *script_args)
File "/home/mracobes/stable_diffusion/stable-diffusion-webui-forge/venv/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/mracobes/stable_diffusion/stable-diffusion-webui-forge/extensions-builtin/sd_forge_controlnet/scripts/controlnet.py", line 526, in process
self.process_unit_after_click_generate(p, unit, params, *args, **kwargs)
File "/home/mracobes/stable_diffusion/stable-diffusion-webui-forge/venv/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/mracobes/stable_diffusion/stable-diffusion-webui-forge/extensions-builtin/sd_forge_controlnet/scripts/controlnet.py", line 389, in process_unit_after_click_generate
assert params.model is not None, logger.error(f"Recognizing Control Model failed: {model_filename}")
^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: None

---
To load target model AutoencoderKL
Begin to load 1 model
Moving model(s) has taken 0.11 seconds
*** Error running process_before_every_sampling: /home/mracobes/stable_diffusion/stable-diffusion-webui-forge/extensions-builtin/sd_forge_controlnet/scripts/controlnet.py
Traceback (most recent call last):
File "/home/mracobes/stable_diffusion/stable-diffusion-webui-forge/modules/scripts.py", line 830, in process_before_every_sampling
script.process_before_every_sampling(p, *script_args, **kwargs)
File "/home/mracobes/stable_diffusion/stable-diffusion-webui-forge/venv/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/mracobes/stable_diffusion/stable-diffusion-webui-forge/extensions-builtin/sd_forge_controlnet/scripts/controlnet.py", line 533, in process_before_every_sampling
self.process_unit_before_every_sampling(p, unit, self.current_params[i], *args, **kwargs)
~~~~~~~~~~~~~~~~~~~^^^
KeyError: 0

---
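As far as the logs show, the second traceback is a knock-on effect of the first: `process` aborted before storing parameters for the unit, so the later per-sampling lookup of unit index 0 finds nothing. A minimal sketch of that failure mode (hypothetical names, assuming `current_params` is keyed by unit index as the traceback suggests):

```python
# current_params maps ControlNet unit index -> recognized model parameters.
# If model recognition fails, processing raises before storing an entry,
# leaving the mapping empty.
current_params = {}

def process_unit_before_every_sampling(i):
    # Mirrors the failing lookup self.current_params[i] from the traceback.
    return current_params[i]

try:
    process_unit_before_every_sampling(0)
except KeyError as e:
    print(f"KeyError: {e}")  # prints: KeyError: 0
```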

Additional information

No response

f-rank commented 8 months ago

Same thing happening here. I select the ClipVision pre-processor and the t2iadapter_style_sd14v1 model, and it spews "Recognizing Control Model failed:" in the console.
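For anyone debugging this: a quick, dependency-free way to see what is actually inside the `.safetensors` file is to read its header. Per the safetensors format, the file starts with an 8-byte little-endian header length followed by a JSON header that maps tensor names to metadata, so the tensor names can be listed without torch or the safetensors library (illustrative helper, not part of Forge):

```python
import json
import struct

def safetensors_keys(path):
    """List tensor names in a .safetensors file using only the stdlib."""
    with open(path, "rb") as f:
        # First 8 bytes: little-endian length of the JSON header.
        (header_len,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(header_len))
    # "__metadata__" is an optional non-tensor entry in the header.
    return [k for k in header if k != "__metadata__"]
```

If the keys look nothing like ControlNet weights (for example, no `control_model`/`input_blocks`-style prefixes), then the file is presumably a style adapter that Forge's recognizer simply has no match for, which would explain the error.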