lllyasviel / stable-diffusion-webui-forge


[Bug]: KeyError: 'Karras' | TypeError: 'NoneType' object is not iterable #847

Open asoupx38 opened 1 month ago

asoupx38 commented 1 month ago

What happened?

I tried Forge for the first time and it was working properly; it even worked with the Pony model. I then loaded a pre-existing photo through "PNG Info" to get its prompt and clicked "Send to txt2img".

After that I deleted the pre-existing settings, like the sampler and VAE. When I tried to generate again, it didn't work.

I then deleted the venv folder and used "webui.bat" to re-install, but it still didn't work.

Browser used: Vivaldi

Steps to reproduce the problem

  1. Open webui-user.bat
  2. Input prompts
  3. Generate image

What should have happened?

WebUI should generate the image, but instead it keeps showing the following text just below where the image should appear: "TypeError: 'NoneType' object is not iterable"

(Screenshot attached: vivaldi_DSlEecJfAh)

What browsers do you use to access the UI?

Other

Sysinfo

sysinfo-2024-07-19-01-24.json

Console logs

venv "C:\Users\toryn\forge\stable-diffusion-webui-forge\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: f0.0.17v1.8.0rc-latest-278-gbfee03d8
Commit hash: bfee03d8d9415a925616f40ede030fe7a51cbcfd
Installing forge_legacy_preprocessor requirement: changing opencv-python version from 4.10.0.84 to 4.8.0
Installing sd-forge-controlnet requirement: changing opencv-python version from 4.10.0.84 to 4.8.0
Launching Web UI with arguments:
Total VRAM 8191 MB, total RAM 32666 MB
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3070 Ti : native
Hint: your device supports --pin-shared-memory for potential speed improvements.
Hint: your device supports --cuda-malloc for potential speed improvements.
Hint: your device supports --cuda-stream for potential speed improvements.
VAE dtype: torch.bfloat16
CUDA Stream Activated:  False
Using pytorch cross attention
ControlNet preprocessor location: C:\Users\toryn\forge\stable-diffusion-webui-forge\models\ControlNetPreprocessor
Loading weights [fe9d8d2f64] from C:\Users\toryn\forge\stable-diffusion-webui-forge\models\Stable-diffusion\AnimeStyle\Anime\sakuramochimix_v10.safetensors
2024-07-18 19:15:48,422 - ControlNet - INFO - ControlNet UI callback registered.
model_type EPS
UNet ADM Dimension 0
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 13.6s (prepare environment: 6.2s, import torch: 2.9s, import gradio: 0.7s, setup paths: 0.8s, initialize shared: 0.1s, other imports: 0.5s, load scripts: 1.5s, create ui: 0.6s, gradio launch: 0.2s).
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}
Loading VAE weights specified in settings: C:\Users\toryn\forge\stable-diffusion-webui-forge\models\VAE\kl-f8-anime2.ckpt
To load target model SD1ClipModel
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) =  7092.9990234375
[Memory Management] Model Memory (MB) =  454.2076225280762
[Memory Management] Minimal Inference Memory (MB) =  1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) =  5614.791400909424
Moving model(s) has taken 0.08 seconds
Model loaded in 2.9s (load weights from disk: 0.3s, forge load real models: 2.1s, load VAE: 0.2s, calculate empty prompt: 0.2s).
Calculating sha256 for C:\Users\toryn\forge\stable-diffusion-webui-forge\models\Stable-diffusion\AnimeStyle\Anime\aingdiffusion_v85.safetensors: 6578ed596f9838fa4ed84a11111a8fcac82bc4a54bda509c75b991ba689e7515
Loading weights [6578ed596f] from C:\Users\toryn\forge\stable-diffusion-webui-forge\models\Stable-diffusion\AnimeStyle\Anime\aingdiffusion_v85.safetensors
model_type EPS
UNet ADM Dimension 0
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}
Loading VAE weights specified in settings: C:\Users\toryn\forge\stable-diffusion-webui-forge\models\VAE\kl-f8-anime2.ckpt
To load target model SD1ClipModel
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) =  7048.6484375
[Memory Management] Model Memory (MB) =  454.2076225280762
[Memory Management] Minimal Inference Memory (MB) =  1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) =  5570.440814971924
Moving model(s) has taken 0.20 seconds
Model loaded in 3.5s (unload existing model: 0.3s, calculate hash: 1.5s, forge load real models: 1.1s, load VAE: 0.2s, calculate empty prompt: 0.3s).
To load target model BaseModel
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) =  6731.4716796875
[Memory Management] Model Memory (MB) =  1639.4137649536133
[Memory Management] Minimal Inference Memory (MB) =  1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) =  4068.0579147338867
Moving model(s) has taken 0.26 seconds
Traceback (most recent call last):
  File "C:\Users\toryn\forge\stable-diffusion-webui-forge\modules_forge\main_thread.py", line 37, in loop
    task.work()
  File "C:\Users\toryn\forge\stable-diffusion-webui-forge\modules_forge\main_thread.py", line 26, in work
    self.result = self.func(*self.args, **self.kwargs)
  File "C:\Users\toryn\forge\stable-diffusion-webui-forge\modules\txt2img.py", line 111, in txt2img_function
    processed = processing.process_images(p)
  File "C:\Users\toryn\forge\stable-diffusion-webui-forge\modules\processing.py", line 752, in process_images
    res = process_images_inner(p)
  File "C:\Users\toryn\forge\stable-diffusion-webui-forge\modules\processing.py", line 922, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "C:\Users\toryn\forge\stable-diffusion-webui-forge\modules\processing.py", line 1275, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "C:\Users\toryn\forge\stable-diffusion-webui-forge\modules\sd_samplers_kdiffusion.py", line 214, in sample
    sigmas = self.get_sigmas(p, steps).to(x.device)
  File "C:\Users\toryn\forge\stable-diffusion-webui-forge\modules\sd_samplers_kdiffusion.py", line 111, in get_sigmas
    sigmas_func = k_diffusion_scheduler[opts.k_sched_type]
KeyError: 'Karras'
'Karras'
*** Error completing request
*** Arguments: ('task(bubis4f8qwlf5v7)', <gradio.routes.Request object at 0x0000014B1DF3AAD0>, '1girl, pink hair', 'bad anatomy, worst anatomy', [], 20, 'DPM++ 2M Karras', 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], 0, False, '', 0.8, -1, False, -1, 0, 0, 0, ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), False, 7, 1, 'Constant', 0, 'Constant', 0, 1, 'enable', 'MEAN', 'AD', 1, False, 1.01, 1.02, 0.99, 0.95, False, 0.5, 2, False, 256, 2, 0, False, False, 3, 2, 0, 0.35, True, 'bicubic', 'bicubic', False, 0, 'anisotropic', 0, 'reinhard', 100, 0, 'subtract', 0, 0, 'gaussian', 'add', 0, 100, 127, 0, 'hard_clamp', 5, 0, 'None', 'None', False, 'MultiDiffusion', 768, 768, 64, 4, False, False, False, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "C:\Users\toryn\forge\stable-diffusion-webui-forge\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
    TypeError: 'NoneType' object is not iterable

---

Additional information

Browser used was Vivaldi.

Panchovix commented 1 month ago

That's because the PNG you imported was generated with the new schedulers from A1111. Forge doesn't have that update, so the scheduler name stored in the PNG isn't a key in its scheduler table, and the lookup k_diffusion_scheduler[opts.k_sched_type] raises the KeyError.
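
For illustration, here is a minimal sketch of the failure chain (the dict contents below are stand-ins, not Forge's actual scheduler table): the unknown key raises the KeyError, the background task therefore never produces a result, and the request wrapper then iterates None, which is exactly the TypeError shown under the image.

```python
# Stand-in for the scheduler table referenced in the traceback; the real
# contents differ, but the lookup pattern is the same plain dict indexing.
k_diffusion_scheduler = {
    "Automatic": None,
    "karras": lambda n, sigma_min, sigma_max: ...,  # placeholder sigma function
}

opts_k_sched_type = "Karras"  # scheduler name carried over from the A1111 PNG

def task_work():
    # Mirrors get_sigmas(): plain indexing, so an unknown key raises.
    return k_diffusion_scheduler[opts_k_sched_type]  # KeyError: 'Karras'

result = None
try:
    result = task_work()
except KeyError as e:
    print(e)  # prints: 'Karras' -- the bare repeat seen in the console log

# The request wrapper then does roughly res = list(func(*args, **kwargs));
# with no result that is list(None):
list(result)  # TypeError: 'NoneType' object is not iterable

# A defensive lookup, e.g. k_diffusion_scheduler.get(name, default_func),
# would avoid the crash entirely.
```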

seeus00 commented 1 month ago

Go to Settings -> type "scheduler" in the search bar -> under "Scheduler type", select "Automatic".
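
If the UI can't be reached, the same reset can be done by editing the saved setting directly. A minimal sketch, assuming the option is persisted under the key "k_sched_type" (the attribute name seen in the traceback) in config.json at the Forge root; back the file up first:

```python
# Sketch: reset the scheduler setting outside the UI. The key name
# "k_sched_type" is taken from the traceback (opts.k_sched_type); that it is
# stored in config.json at the Forge root is an assumption about the default
# setup -- back the file up before editing.
import json
from pathlib import Path

cfg_path = Path(r"C:\Users\toryn\forge\stable-diffusion-webui-forge\config.json")
cfg = json.loads(cfg_path.read_text(encoding="utf-8"))
cfg["k_sched_type"] = "Automatic"  # a key the older scheduler table does have
cfg_path.write_text(json.dumps(cfg, indent=4), encoding="utf-8")
```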