lllyasviel / stable-diffusion-webui-forge

GNU Affero General Public License v3.0

IndexError: tuple index out of range #1845

Closed: MqtUA closed this issue 3 weeks ago

MqtUA commented 3 weeks ago

So each time I try to generate something with Flux Dev, I get this error.

Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: f2.0.1v1.10.1-previous-532-g791f04f7
Commit hash: 791f04f71e81bfc20a14fba1bbb8a11404c9a595
Launching Web UI with arguments:
Total VRAM 8188 MB, total RAM 32376 MB
pytorch version: 2.3.1+cu121
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4060 : native
Hint: your device supports --cuda-malloc for potential speed improvements.
VAE dtype preferences: [torch.bfloat16, torch.float32] -> torch.bfloat16
CUDA Using Stream: False
D:\SD\Forge_Latest\system\python\lib\site-packages\transformers\utils\hub.py:127: FutureWarning: Using `TRANSFORMERS_CACHE` is deprecated and will be removed in v5 of Transformers. Use `HF_HOME` instead.
  warnings.warn(
Using pytorch cross attention
Using pytorch attention for VAE
ControlNet preprocessor location: D:\SD\Forge_Latest\webui\models\ControlNetPreprocessor
[-] ADetailer initialized. version: 24.8.0, num models: 14
2024-09-17 15:59:49,565 - ControlNet - INFO - ControlNet UI callback registered.
D:\SD\Forge_Latest\webui\extensions\sd-webui-civbrowser\scripts\civsfz_ui.py:127: GradioDeprecationWarning: unexpected argument for Textbox: choices
  grtxtSaveFilename = gr.Textbox(label="Save file name", choices=[], interactive=True, value=None)
Model selected: {'checkpoint_info': {'filename': 'D:\\SD\\Forge_Latest\\webui\\models\\Stable-diffusion\\flux1-dev-bnb-nf4-v2.safetensors', 'hash': 'f0770152'}, 'additional_modules': [], 'unet_storage_dtype': None}
Using online LoRAs in FP16: False
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 16.1s (prepare environment: 2.1s, import torch: 4.8s, initialize shared: 0.2s, other imports: 0.3s, load scripts: 1.7s, initialize google blockly: 0.1s, create ui: 4.2s, gradio launch: 2.3s, app_started_callback: 0.3s).
Environment vars changed: {'stream': False, 'inference_memory': 1074.0, 'pin_shared_memory': True}
[GPU Setting] You will use 86.88% GPU memory (7113.00 MB) to load weights, and use 13.12% GPU memory (1074.00 MB) to do matrix computation.
Environment vars changed: {'stream': False, 'inference_memory': 1024.0, 'pin_shared_memory': False}
[GPU Setting] You will use 87.49% GPU memory (7163.00 MB) to load weights, and use 12.51% GPU memory (1024.00 MB) to do matrix computation.
Environment vars changed: {'stream': False, 'inference_memory': 1074.0, 'pin_shared_memory': True}
[GPU Setting] You will use 86.88% GPU memory (7113.00 MB) to load weights, and use 13.12% GPU memory (1074.00 MB) to do matrix computation.
Model selected: {'checkpoint_info': {'filename': 'D:\\SD\\Forge_Latest\\webui\\models\\Stable-diffusion\\flux1-dev-bnb-nf4.safetensors', 'hash': '0184473b'}, 'additional_modules': [], 'unet_storage_dtype': None}
Using online LoRAs in FP16: False
Loading Model: {'checkpoint_info': {'filename': 'D:\\SD\\Forge_Latest\\webui\\models\\Stable-diffusion\\flux1-dev-bnb-nf4.safetensors', 'hash': '0184473b'}, 'additional_modules': [], 'unet_storage_dtype': None}
[Unload] Trying to free all memory for cuda:0 with 0 models keep loaded ... Done.
StateDict Keys: {'transformer': 2350, 'vae': 244, 'text_encoder': 198, 'text_encoder_2': 220, 'ignore': 0}
Using Detected T5 Data Type: torch.float8_e4m3fn
Using Detected UNet Type: nf4
Using pre-quant state dict!
Working with z of shape (1, 16, 32, 32) = 16384 dimensions.
K-Model Created: {'storage_dtype': 'nf4', 'computation_dtype': torch.bfloat16}
Model loaded in 1.9s (unload existing model: 0.2s, forge model load: 1.6s).
Skipping unconditional conditioning when CFG = 1. Negative Prompts are ignored.
[Unload] Trying to free 7775.00 MB for cuda:0 with 0 models keep loaded ... Done.
[Memory Management] Target: JointTextEncoder, Free GPU: 7087.00 MB, Model Require: 5154.62 MB, Previously Loaded: 0.00 MB, Inference Require: 1074.00 MB, Remaining: 858.38 MB, All loaded to GPU.
Moving model(s) has taken 7.87 seconds
Traceback (most recent call last):
  File "D:\SD\Forge_Latest\webui\modules_forge\main_thread.py", line 30, in work
    self.result = self.func(*self.args, **self.kwargs)
  File "D:\SD\Forge_Latest\webui\modules\txt2img.py", line 123, in txt2img_function
    processed = processing.process_images(p)
  File "D:\SD\Forge_Latest\webui\modules\processing.py", line 817, in process_images
    res = process_images_inner(p)
  File "D:\SD\Forge_Latest\webui\modules\processing.py", line 930, in process_images_inner
    p.setup_conds()
  File "D:\SD\Forge_Latest\webui\modules\processing.py", line 1526, in setup_conds
    super().setup_conds()
  File "D:\SD\Forge_Latest\webui\modules\processing.py", line 502, in setup_conds
    self.c = self.get_conds_with_caching(prompt_parser.get_multicond_learned_conditioning, prompts, total_steps, [self.cached_c], self.extra_network_data)
  File "D:\SD\Forge_Latest\webui\modules\processing.py", line 471, in get_conds_with_caching
    cache[1] = function(shared.sd_model, required_prompts, steps, hires_steps, shared.opts.use_old_scheduling)
  File "D:\SD\Forge_Latest\webui\modules\prompt_parser.py", line 262, in get_multicond_learned_conditioning
    learned_conditioning = get_learned_conditioning(model, prompt_flat_list, steps, hires_steps, use_old_scheduling)
  File "D:\SD\Forge_Latest\webui\modules\prompt_parser.py", line 189, in get_learned_conditioning
    conds = model.get_learned_conditioning(texts)
  File "D:\SD\Forge_Latest\system\python\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\SD\Forge_Latest\webui\backend\diffusion_engine\flux.py", line 78, in get_learned_conditioning
    cond_l, pooled_l = self.text_processing_engine_l(prompt)
  File "D:\SD\Forge_Latest\webui\backend\text_processing\classic_engine.py", line 268, in __call__
    z = self.process_tokens(tokens, multipliers)
  File "D:\SD\Forge_Latest\webui\backend\text_processing\classic_engine.py", line 301, in process_tokens
    z = self.encode_with_transformers(tokens)
  File "D:\SD\Forge_Latest\webui\backend\text_processing\classic_engine.py", line 134, in encode_with_transformers
    z = outputs.hidden_states[layer_id]
IndexError: tuple index out of range
tuple index out of range

It broke a few updates ago; before that, everything worked fine.

DenOfEquity commented 3 weeks ago

layer_id in the line with the error is the CLIP skip value, and the error suggests it has been set to an impossible value. If simply resetting it via the Clip skip slider doesn't work, edit config.json in the webui directory of your Forge install and remove the line "CLIP_stop_at_last_layers": .... Alternatively, delete that file entirely (but this will also lose any other settings you've changed).
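For reference, the failure mode is easy to reproduce in isolation: the text encoder returns one hidden state per layer (plus the input embeddings), and Forge indexes that tuple with a negative index derived from the Clip skip setting, so an absurd stored value overflows the tuple immediately. A minimal sketch (the layer count and the negative-indexing convention are assumptions based on A1111-style code, not copied from Forge):

```python
# Hypothetical stand-in for outputs.hidden_states: 12 CLIP-L layers + embeddings.
hidden_states = tuple(f"layer_{i}" for i in range(13))

clip_skip = 7113          # the impossible value that ended up in config.json
layer_id = -clip_skip     # Clip skip 1 -> -1 (last layer), 2 -> -2, and so on

try:
    z = hidden_states[layer_id]
except IndexError as e:
    print(e)  # prints: tuple index out of range -- same failure as in the traceback
```

With a sane value such as `clip_skip = 1`, `hidden_states[-1]` resolves to the last layer and no error is raised.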

MqtUA commented 3 weeks ago

> layer_id in the line with the error is CLIP skip, the error suggests it has been set to an impossible value. If simply resetting it by adjusting the Clip skip slider doesn't work, edit config.json from the Forge install location\webui directory, remove the line "CLIP_stop_at_last_layers": .... Or delete that file (but this will also lose any other Settings you've changed).

lol, indeed it was set to 7113, dunno why.

Deleting that line worked. Thank you.
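For anyone who prefers to script the fix rather than hand-edit the file, a minimal sketch (`reset_clip_skip` is a hypothetical helper, not part of Forge; the path is taken from the log above and will differ per install):

```python
import json
from pathlib import Path

def reset_clip_skip(config_path: Path) -> None:
    """Remove a stored CLIP skip value so the webui falls back to its default (1)."""
    config = json.loads(config_path.read_text(encoding="utf-8"))
    config.pop("CLIP_stop_at_last_layers", None)  # no-op if the key is absent
    config_path.write_text(json.dumps(config, indent=4), encoding="utf-8")

# Example (path assumed from the log above; close Forge before editing):
# reset_clip_skip(Path(r"D:\SD\Forge_Latest\webui\config.json"))
```

All other settings in config.json are preserved, which is the advantage over deleting the whole file.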