Mikubill / sd-webui-controlnet

WebUI extension for ControlNet
GNU General Public License v3.0

[Bug]: AttributeError: 'Slider' object has no attribute 'elem_classes' #716

Closed: Lalimec closed this issue 1 year ago

Lalimec commented 1 year ago

Is there an existing issue for this?

What happened?

I was happily working on my generations until I decided to do a git pull in the extensions folder. Now I'm getting this weird error; it seems to be caused by Gradio or some of the ControlNet UI elements, I guess. How can I fix this?

Steps to reproduce the problem

  1. Go to ....
  2. Press ....
  3. ...

What should have happened?

It should've started up as it used to.

Commit where the problem happens

webui: 22bcc7be428c94e9408f589966c2040187245d81
controlnet: 241c05f8c9d3c5abe637187e3c4bb46f17447029

What browsers do you use to access the UI?

No response

Command Line Arguments

--no-half-vae --disable-nan-check --listen --max-batch-count 16 --deepdanbooru --allow-code --theme dark --enable-insecure-extension-access --gradio-img2img-tool color-sketch --disable-safe-unpickle --autolaunch --api --enable-console-prompts --cors-allow-origins=http://localhost:9999 --cors-allow-origins=https://www.painthua.com

Console logs

(base) ubuntu@ip-172-31-34-82:~/stable-diffusion-webui$ bash webui.sh

################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye)
################################################################

################################################################
Running on ubuntu user
################################################################

################################################################
Repo already cloned, using it as install directory
################################################################

################################################################
Create and activate python venv
################################################################

################################################################
Launching launch.py...
################################################################
Python 3.9.15 | packaged by conda-forge | (main, Nov 22 2022, 15:55:03)
[GCC 10.4.0]
Commit hash: 22bcc7be428c94e9408f589966c2040187245d81
Installing requirements for Web UI
Installing requirements for Batch Face Swap

loading Smart Crop reqs from /home/ubuntu/stable-diffusion-webui/extensions/sd_smartprocess/requirements.txt
Checking Smart Crop requirements.

If submitting an issue on github, please provide the full startup log for debugging purposes.

Initializing Dreambooth
Dreambooth revision: 926ae204ef5de17efca2059c334b6098492a0641
Successfully installed accelerate-0.18.0 gitpython-3.1.31 requests-2.28.2 transformers-4.26.1

Does your project take forever to startup?
Repetitive dependency installation may be the reason.
Automatic1111's base project sets strict requirements on outdated dependencies.
If an extension is using a newer version, the dependency is uninstalled and reinstalled twice every startup.

[+] xformers version 0.0.17.dev464 installed.
[+] torch version 1.13.1+cu117 installed.
[+] torchvision version 0.14.1+cu117 installed.
[+] accelerate version 0.18.0 installed.
[+] diffusers version 0.14.0 installed.
[+] transformers version 4.26.1 installed.
[+] bitsandbytes version 0.35.4 installed.

Installing sd-dynamic-prompts requirements.txt

Installing None
Installing onnxruntime-gpu...
Installing None
Installing opencv-python...
Installing None
Installing Pillow...

current transparent-background 1.2.3

Installing imageio-ffmpeg requirement for depthmap script

Launching Web UI with arguments: --no-half-vae --disable-nan-check --listen --max-batch-count 16 --deepdanbooru --allow-code --theme dark --enable-insecure-extension-access --gradio-img2img-tool color-sketch --disable-safe-unpickle --autolaunch --api --enable-console-prompts --cors-allow-origins=http://localhost:9999 --cors-allow-origins=https://www.painthua.com
2023-04-07 10:50:41.615967: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-04-07 10:50:42.293913: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
No module 'xformers'. Proceeding without it.
Civitai Helper: Get Custom Model Folder
Civitai Helper: Load setting from: /home/ubuntu/stable-diffusion-webui/extensions/Stable-Diffusion-Webui-Civitai-Helper/setting.json
Civitai Helper: No setting file, use default
Additional Network extension not installed, Only hijack built-in lora
LoCon Extension hijack built-in lora successfully
[AddNet] Updating model hashes...
100%|███████████████████████████████████████████| 1/1 [00:00<00:00, 2428.66it/s]
[AddNet] Updating model hashes...
100%|███████████████████████████████████████████| 1/1 [00:00<00:00, 1818.87it/s]
Loading weights [8712e20a5d] from /home/ubuntu/stable-diffusion-webui/models/Stable-diffusion/_general/Anything-V3.0.ckpt
Creating model from config: /home/ubuntu/stable-diffusion-webui/configs/v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
loading file vocab.json from cache at /home/ubuntu/.cache/huggingface/hub/models--openai--clip-vit-large-patch14/snapshots/8d052a0f05efbaefbc9e8786ba291cfdf93e5bff/vocab.json
loading file merges.txt from cache at /home/ubuntu/.cache/huggingface/hub/models--openai--clip-vit-large-patch14/snapshots/8d052a0f05efbaefbc9e8786ba291cfdf93e5bff/merges.txt
loading file added_tokens.json from cache at None
loading file special_tokens_map.json from cache at /home/ubuntu/.cache/huggingface/hub/models--openai--clip-vit-large-patch14/snapshots/8d052a0f05efbaefbc9e8786ba291cfdf93e5bff/special_tokens_map.json
loading file tokenizer_config.json from cache at /home/ubuntu/.cache/huggingface/hub/models--openai--clip-vit-large-patch14/snapshots/8d052a0f05efbaefbc9e8786ba291cfdf93e5bff/tokenizer_config.json
loading configuration file config.json from cache at /home/ubuntu/.cache/huggingface/hub/models--openai--clip-vit-large-patch14/snapshots/8d052a0f05efbaefbc9e8786ba291cfdf93e5bff/config.json
Model config CLIPTextConfig {
  "attention_dropout": 0.0,
  "bos_token_id": 0,
  "dropout": 0.0,
  "eos_token_id": 2,
  "hidden_act": "quick_gelu",
  "hidden_size": 768,
  "initializer_factor": 1.0,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "layer_norm_eps": 1e-05,
  "max_position_embeddings": 77,
  "model_type": "clip_text_model",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 1,
  "projection_dim": 768,
  "transformers_version": "4.26.1",
  "vocab_size": 49408
}

All model checkpoint weights were used when initializing CLIPTextModel.

Some weights of CLIPTextModel were not initialized from the model checkpoint at None and are newly initialized: ['text_model.encoder.layers.8.layer_norm2.weight', 'text_model.encoder.layers.1.layer_norm2.weight', 'text_model.encoder.layers.2.mlp.fc1.weight', 'text_model.encoder.layers.0.self_attn.k_proj.weight', 'text_model.encoder.layers.0.self_attn.v_proj.weight', 'text_model.encoder.layers.11.self_attn.q_proj.weight', 'text_model.encoder.layers.11.self_attn.k_proj.weight', 'text_model.encoder.layers.5.self_attn.k_proj.bias', 'text_model.encoder.layers.10.self_attn.v_proj.weight', 'text_model.encoder.layers.0.layer_norm2.weight', 'text_model.encoder.layers.4.mlp.fc1.weight', 'text_model.encoder.layers.7.self_attn.q_proj.bias', 'text_model.encoder.layers.10.layer_norm1.weight', 'text_model.encoder.layers.0.mlp.fc1.weight', 'text_model.encoder.layers.0.mlp.fc1.bias', 'text_model.encoder.layers.2.mlp.fc2.weight', 'text_model.encoder.layers.6.self_attn.v_proj.weight', 'text_model.encoder.layers.6.mlp.fc2.weight', 'text_model.encoder.layers.5.mlp.fc2.weight', 'text_model.encoder.layers.2.layer_norm1.bias', 'text_model.encoder.layers.4.self_attn.q_proj.weight', 'text_model.encoder.layers.6.layer_norm2.weight', 'text_model.encoder.layers.4.self_attn.v_proj.weight', 'text_model.encoder.layers.11.layer_norm2.bias', 'text_model.encoder.layers.10.layer_norm1.bias', 'text_model.encoder.layers.7.layer_norm1.weight', 'text_model.encoder.layers.1.mlp.fc1.bias', 'text_model.encoder.layers.8.layer_norm1.bias', 'text_model.encoder.layers.10.layer_norm2.bias', 'text_model.encoder.layers.8.mlp.fc1.weight', 'text_model.encoder.layers.2.layer_norm2.bias', 'text_model.encoder.layers.8.self_attn.out_proj.bias', 'text_model.encoder.layers.9.self_attn.out_proj.bias', 'text_model.encoder.layers.5.layer_norm1.bias', 'text_model.encoder.layers.9.layer_norm2.weight', 'text_model.encoder.layers.11.self_attn.out_proj.weight', 'text_model.encoder.layers.3.self_attn.v_proj.weight', 'text_model.encoder.layers.8.mlp.fc2.bias', 'text_model.encoder.layers.11.self_attn.q_proj.bias', 'text_model.encoder.layers.3.mlp.fc2.weight', 'text_model.encoder.layers.10.self_attn.q_proj.bias', 'text_model.encoder.layers.1.self_attn.out_proj.weight', 'text_model.encoder.layers.3.mlp.fc2.bias', 'text_model.encoder.layers.10.mlp.fc2.weight', 'text_model.encoder.layers.0.self_attn.out_proj.weight', 'text_model.encoder.layers.4.mlp.fc2.bias', 'text_model.encoder.layers.9.self_attn.v_proj.weight', 'text_model.encoder.layers.2.self_attn.v_proj.bias', 'text_model.encoder.layers.1.self_attn.v_proj.bias', 'text_model.encoder.layers.3.self_attn.out_proj.weight', 'text_model.encoder.layers.4.mlp.fc1.bias', 'text_model.encoder.layers.4.mlp.fc2.weight', 'text_model.encoder.layers.0.mlp.fc2.weight', 'text_model.encoder.layers.11.mlp.fc2.weight', 'text_model.encoder.layers.0.self_attn.k_proj.bias', 'text_model.encoder.layers.5.mlp.fc1.weight', 'text_model.encoder.layers.0.self_attn.v_proj.bias', 'text_model.encoder.layers.6.layer_norm2.bias', 'text_model.encoder.layers.9.self_attn.k_proj.bias', 'text_model.encoder.layers.2.layer_norm1.weight', 'text_model.encoder.layers.3.self_attn.k_proj.weight', 'text_model.encoder.layers.11.mlp.fc2.bias', 'text_model.encoder.layers.1.mlp.fc2.bias', 'text_model.encoder.layers.9.self_attn.k_proj.weight', 'text_model.encoder.layers.3.self_attn.v_proj.bias', 'text_model.encoder.layers.3.mlp.fc1.bias', 'text_model.encoder.layers.5.self_attn.k_proj.weight', 'text_model.encoder.layers.9.mlp.fc2.bias', 
'text_model.encoder.layers.7.self_attn.q_proj.weight', 'text_model.encoder.layers.2.self_attn.out_proj.bias', 'text_model.encoder.layers.7.self_attn.k_proj.bias', 'text_model.encoder.layers.11.layer_norm2.weight', 'text_model.encoder.layers.8.mlp.fc2.weight', 'text_model.encoder.layers.5.self_attn.v_proj.bias', 'text_model.encoder.layers.5.layer_norm2.bias', 'text_model.encoder.layers.7.self_attn.out_proj.weight', 'text_model.encoder.layers.11.layer_norm1.weight', 'text_model.encoder.layers.2.layer_norm2.weight', 'text_model.encoder.layers.10.self_attn.v_proj.bias', 'text_model.encoder.layers.3.self_attn.k_proj.bias', 'text_model.encoder.layers.11.layer_norm1.bias', 'text_model.encoder.layers.8.self_attn.v_proj.bias', 'text_model.encoder.layers.8.self_attn.out_proj.weight', 'text_model.encoder.layers.10.self_attn.out_proj.bias', 'text_model.encoder.layers.3.mlp.fc1.weight', 'text_model.encoder.layers.7.mlp.fc1.weight', 'text_model.encoder.layers.4.self_attn.v_proj.bias', 'text_model.encoder.layers.0.self_attn.q_proj.bias', 'text_model.encoder.layers.4.self_attn.out_proj.weight', 'text_model.encoder.layers.9.mlp.fc1.weight', 'text_model.encoder.layers.9.mlp.fc1.bias', 'text_model.encoder.layers.1.mlp.fc1.weight', 'text_model.final_layer_norm.weight', 'text_model.encoder.layers.0.layer_norm2.bias', 'text_model.encoder.layers.6.self_attn.k_proj.weight', 'text_model.encoder.layers.8.self_attn.k_proj.bias', 'text_model.final_layer_norm.bias', 'text_model.encoder.layers.5.self_attn.out_proj.bias', 'text_model.encoder.layers.6.mlp.fc2.bias', 'text_model.encoder.layers.0.layer_norm1.bias', 'text_model.encoder.layers.8.self_attn.k_proj.weight', 'text_model.encoder.layers.10.self_attn.k_proj.bias', 'text_model.encoder.layers.9.layer_norm1.weight', 'text_model.encoder.layers.2.mlp.fc1.bias', 'text_model.encoder.layers.4.self_attn.k_proj.weight', 'text_model.encoder.layers.5.self_attn.q_proj.bias', 'text_model.encoder.layers.9.layer_norm2.bias', 'text_model.encoder.layers.4.self_attn.q_proj.bias', 'text_model.encoder.layers.3.self_attn.q_proj.bias', 'text_model.encoder.layers.10.self_attn.q_proj.weight', 'text_model.encoder.layers.7.mlp.fc2.bias', 'text_model.encoder.layers.11.self_attn.v_proj.bias', 'text_model.encoder.layers.2.self_attn.v_proj.weight', 'text_model.encoder.layers.10.layer_norm2.weight', 'text_model.encoder.layers.5.mlp.fc2.bias', 'text_model.encoder.layers.1.self_attn.out_proj.bias', 'text_model.encoder.layers.7.layer_norm2.bias', 'text_model.encoder.layers.5.mlp.fc1.bias', 'text_model.embeddings.position_ids', 'text_model.encoder.layers.1.self_attn.k_proj.weight', 'text_model.encoder.layers.4.layer_norm2.weight', 'text_model.encoder.layers.3.layer_norm1.weight', 'text_model.encoder.layers.7.self_attn.k_proj.weight', 'text_model.encoder.layers.9.self_attn.out_proj.weight', 'text_model.encoder.layers.8.self_attn.q_proj.weight', 'text_model.encoder.layers.2.self_attn.q_proj.weight', 'text_model.encoder.layers.6.self_attn.v_proj.bias', 'text_model.encoder.layers.8.self_attn.q_proj.bias', 'text_model.encoder.layers.1.layer_norm2.bias', 'text_model.encoder.layers.5.self_attn.v_proj.weight', 'text_model.encoder.layers.10.mlp.fc2.bias', 'text_model.encoder.layers.5.layer_norm2.weight', 'text_model.encoder.layers.6.layer_norm1.weight', 'text_model.encoder.layers.6.layer_norm1.bias', 'text_model.encoder.layers.9.self_attn.q_proj.bias', 'text_model.encoder.layers.6.mlp.fc1.weight', 'text_model.encoder.layers.2.mlp.fc2.bias', 'text_model.encoder.layers.10.self_attn.out_proj.weight', 
'text_model.encoder.layers.9.layer_norm1.bias', 'text_model.encoder.layers.9.mlp.fc2.weight', 'text_model.encoder.layers.6.mlp.fc1.bias', 'text_model.encoder.layers.7.self_attn.out_proj.bias', 'text_model.encoder.layers.11.self_attn.out_proj.bias', 'text_model.encoder.layers.3.self_attn.q_proj.weight', 'text_model.encoder.layers.8.mlp.fc1.bias', 'text_model.encoder.layers.0.self_attn.q_proj.weight', 'text_model.encoder.layers.1.layer_norm1.weight', 'text_model.encoder.layers.0.mlp.fc2.bias', 'text_model.encoder.layers.4.self_attn.k_proj.bias', 'text_model.encoder.layers.1.self_attn.q_proj.bias', 'text_model.encoder.layers.7.mlp.fc2.weight', 'text_model.encoder.layers.7.self_attn.v_proj.weight', 'text_model.encoder.layers.9.self_attn.v_proj.bias', 'text_model.encoder.layers.1.self_attn.k_proj.bias', 'text_model.encoder.layers.2.self_attn.k_proj.weight', 'text_model.encoder.layers.0.layer_norm1.weight', 'text_model.encoder.layers.10.mlp.fc1.bias', 'text_model.encoder.layers.8.layer_norm2.bias', 'text_model.encoder.layers.4.layer_norm1.bias', 'text_model.encoder.layers.3.self_attn.out_proj.bias', 'text_model.encoder.layers.4.self_attn.out_proj.bias', 'text_model.encoder.layers.2.self_attn.out_proj.weight', 'text_model.encoder.layers.0.self_attn.out_proj.bias', 'text_model.encoder.layers.5.layer_norm1.weight', 'text_model.encoder.layers.2.self_attn.q_proj.bias', 'text_model.encoder.layers.6.self_attn.k_proj.bias', 'text_model.encoder.layers.9.self_attn.q_proj.weight', 'text_model.encoder.layers.6.self_attn.out_proj.weight', 'text_model.encoder.layers.7.layer_norm1.bias', 'text_model.encoder.layers.6.self_attn.out_proj.bias', 'text_model.encoder.layers.7.layer_norm2.weight', 'text_model.encoder.layers.3.layer_norm2.bias', 'text_model.encoder.layers.1.layer_norm1.bias', 'text_model.encoder.layers.1.mlp.fc2.weight', 'text_model.encoder.layers.4.layer_norm1.weight', 'text_model.encoder.layers.6.self_attn.q_proj.weight', 'text_model.encoder.layers.7.self_attn.v_proj.bias', 'text_model.encoder.layers.4.layer_norm2.bias', 'text_model.encoder.layers.11.mlp.fc1.weight', 'text_model.encoder.layers.1.self_attn.q_proj.weight', 'text_model.embeddings.token_embedding.weight', 'text_model.encoder.layers.1.self_attn.v_proj.weight', 'text_model.embeddings.position_embedding.weight', 'text_model.encoder.layers.2.self_attn.k_proj.bias', 'text_model.encoder.layers.8.layer_norm1.weight', 'text_model.encoder.layers.11.self_attn.k_proj.bias', 'text_model.encoder.layers.10.mlp.fc1.weight', 'text_model.encoder.layers.3.layer_norm2.weight', 'text_model.encoder.layers.8.self_attn.v_proj.weight', 'text_model.encoder.layers.11.self_attn.v_proj.weight', 'text_model.encoder.layers.7.mlp.fc1.bias', 'text_model.encoder.layers.3.layer_norm1.bias', 'text_model.encoder.layers.5.self_attn.out_proj.weight', 'text_model.encoder.layers.11.mlp.fc1.bias', 'text_model.encoder.layers.6.self_attn.q_proj.bias', 'text_model.encoder.layers.10.self_attn.k_proj.weight', 'text_model.encoder.layers.5.self_attn.q_proj.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Loading VAE weights found near the checkpoint: /home/ubuntu/stable-diffusion-webui/models/Stable-diffusion/_general/Anything-V3.0.vae.pt
Applying cross attention optimization (Doggettx).
Truncation was not explicitly activated but `max_length` is provided a specific value, please use `truncation=True` to explicitly truncate examples to max length. Defaulting to 'longest_first' truncation strategy. If you encode pairs of sequences (GLUE-style) with the tokenizer you can select this strategy more precisely by providing a specific strategy to `truncation`.
Textual inversion embeddings loaded(22): lyrcml-1600, angry512, sad512, defiance512, charturnerv2, atompunkstylesd15, samdoesarts-style, nervous512, makeitpop_style, lyrcml, lyrcml-1200, txtinv-2000, laugh512, lyrcml-2000, j3nna0rt3ga, roboface-style, smile512, grin512, shock512, hamunaptra-style, lyrcml-2400, happy512
Textual inversion embeddings skipped(1): knollingcase
Model loaded in 5.5s (load weights from disk: 2.5s, create model: 0.4s, apply weights to model: 0.3s, apply half(): 0.2s, load VAE: 1.8s, move model to device: 0.3s).
╭───────────────────── Traceback (most recent call last) ──────────────────────╮
│ /home/ubuntu/stable-diffusion-webui/launch.py:356 in <module>                │
│                                                                              │
│   353                                                                        │
│   354 if __name__ == "__main__":                                             │
│   355 │   prepare_environment()                                              │
│ ❱ 356 │   start()                                                            │
│   357                                                                        │
│                                                                              │
│ /home/ubuntu/stable-diffusion-webui/launch.py:351 in start                   │
│                                                                              │
│   348 │   if '--nowebui' in sys.argv:                                        │
│   349 │   │   webui.api_only()                                               │
│   350 │   else:                                                              │
│ ❱ 351 │   │   webui.webui()                                                  │
│   352                                                                        │
│   353                                                                        │
│   354 if __name__ == "__main__":                                             │
│                                                                              │
│ /home/ubuntu/stable-diffusion-webui/webui.py:243 in webui                    │
│                                                                              │
│   240 │   │   modules.script_callbacks.before_ui_callback()                  │
│   241 │   │   startup_timer.record("scripts before_ui_callback")             │
│   242 │   │                                                                  │
│ ❱ 243 │   │   shared.demo = modules.ui.create_ui()                           │
│   244 │   │   startup_timer.record("create ui")                              │
│   245 │   │                                                                  │
│   246 │   │   if not cmd_opts.no_gradio_queue:                               │
│                                                                              │
│ /home/ubuntu/stable-diffusion-webui/modules/ui.py:446 in create_ui           │
│                                                                              │
│    443 │   parameters_copypaste.reset()                                      │
│    444 │                                                                     │
│    445 │   modules.scripts.scripts_current = modules.scripts.scripts_txt2img │
│ ❱  446 │   modules.scripts.scripts_txt2img.initialize_scripts(is_img2img=Fal │
│    447 │                                                                     │
│    448 │   with gr.Blocks(analytics_enabled=False) as txt2img_interface:     │
│    449 │   │   txt2img_prompt, txt2img_prompt_styles, txt2img_negative_promp │
│                                                                              │
│ /home/ubuntu/stable-diffusion-webui/modules/scripts.py:298 in                │
│ initialize_scripts                                                           │
│                                                                              │
│   295 │   │   auto_processing_scripts = scripts_auto_postprocessing.create_a │
│   296 │   │                                                                  │
│   297 │   │   for script_class, path, basedir, script_module in auto_process │
│ ❱ 298 │   │   │   script = script_class()                                    │
│   299 │   │   │   script.filename = path                                     │
│   300 │   │   │   script.is_txt2img = not is_img2img                         │
│   301 │   │   │   script.is_img2img = is_img2img                             │
│                                                                              │
│ /home/ubuntu/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/c │
│ ontrolnet.py:127 in __init__                                                 │
│                                                                              │
│   124 │   │   self.unloadable = global_state.cn_preprocessor_unloadable      │
│   125 │   │   self.input_image = None                                        │
│   126 │   │   self.latest_model_hash = ""                                    │
│ ❱ 127 │   │   self.txt2img_w_slider = gr.Slider()                            │
│   128 │   │   self.txt2img_h_slider = gr.Slider()                            │
│   129 │   │   self.img2img_w_slider = gr.Slider()                            │
│   130 │   │   self.img2img_h_slider = gr.Slider()                            │
│                                                                              │
│ /home/ubuntu/stable-diffusion-webui/venv/lib/python3.9/site-packages/gradio/ │
│ components.py:683 in __init__                                                │
│                                                                              │
│    680 │   │   self.step = step                                              │
│    681 │   │   if randomize:                                                 │
│    682 │   │   │   value = self.get_random_value                             │
│ ❱  683 │   │   IOComponent.__init__(                                         │
│    684 │   │   │   self,                                                     │
│    685 │   │   │   label=label,                                              │
│    686 │   │   │   every=every,                                              │
│                                                                              │
│ /home/ubuntu/stable-diffusion-webui/modules/scripts.py:544 in                │
│ IOComponent_init                                                             │
│                                                                              │
│   541 │                                                                      │
│   542 │   res = original_IOComponent_init(self, *args, **kwargs)             │
│   543 │                                                                      │
│ ❱ 544 │   add_classes_to_gradio_component(self)                              │
│   545 │                                                                      │
│   546 │   script_callbacks.after_component_callback(self, **kwargs)          │
│   547                                                                        │
│                                                                              │
│ /home/ubuntu/stable-diffusion-webui/modules/scripts.py:529 in                │
│ add_classes_to_gradio_component                                              │
│                                                                              │
│   526 │   this adds gradio-* to the component for css styling (ie gradio-but │
│   527 │   """                                                                │
│   528 │                                                                      │
│ ❱ 529 │   comp.elem_classes = ["gradio-" + comp.get_block_name(), *(comp.ele │
│   530 │                                                                      │
│   531 │   if getattr(comp, 'multiselect', False):                            │
│   532 │   │   comp.elem_classes.append('multiselect')                        │
╰──────────────────────────────────────────────────────────────────────────────╯
AttributeError: 'Slider' object has no attribute 'elem_classes'
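
The last two frames show the cause: webui's `add_classes_to_gradio_component` in `modules/scripts.py` reads `comp.elem_classes`, an attribute that only newer Gradio releases define on components, so an install with an older Gradio fails exactly here. Below is a minimal sketch of that pattern; the `getattr` fallback is a hypothetical guard for illustration, not the project's actual code.

```python
# Sketch of the pattern behind the traceback, not the exact webui code.
# On Gradio builds that predate `elem_classes`, reading the attribute
# raises the AttributeError seen above; the getattr fallback below is a
# hypothetical defensive variant.
import gradio as gr

def add_classes_to_gradio_component(comp):
    # webui appends CSS classes such as "gradio-slider" for styling.
    existing = getattr(comp, "elem_classes", None) or []
    comp.elem_classes = ["gradio-" + comp.get_block_name(), *existing]

slider = gr.Slider()
add_classes_to_gradio_component(slider)
print(slider.elem_classes)  # ['gradio-slider'] on a compatible Gradio
```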

Additional information

No response

Lalimec commented 1 year ago

OK, I guess I messed up the requirements file, and that's what caused this error. Restoring the original one fixed the issue.
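
For anyone else hitting this, a quick way to confirm the mismatch is to compare the installed Gradio against the version webui pins. A rough sketch, assuming the standard webui layout with a `requirements_versions.txt` in the repo root (run it from there):

```python
# Rough check, run from the stable-diffusion-webui root; assumes the
# standard requirements_versions.txt layout with lines like "gradio==x.y".
from importlib.metadata import version
from pathlib import Path

installed = version("gradio")
lines = Path("requirements_versions.txt").read_text().splitlines()
pinned = next((l.split("==", 1)[1] for l in lines if l.startswith("gradio==")), None)

print(f"installed gradio: {installed}, pinned: {pinned}")
if pinned and installed != pinned:
    print("Mismatch: restore the file (git checkout -- requirements_versions.txt)")
    print("and restart webui.sh so launch.py reinstalls the pinned version.")
```

Restoring the file and relaunching lets launch.py reinstall the pinned dependencies, which is effectively what fixed it here.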