Haoming02 / sd-forge-ic-light

An Extension for Forge Webui that implements IC-Light

TypeError: 'NoneType' object is not iterable #9

Open · 4KsTan opened this issue 1 day ago

4KsTan commented 1 day ago
Traceback (most recent call last):
  File "D:\stable-diffusion-webui-reForge\modules_forge\main_thread.py", line 37, in loop
    task.work()
  File "D:\stable-diffusion-webui-reForge\modules_forge\main_thread.py", line 26, in work
    self.result = self.func(*self.args, **self.kwargs)
  File "D:\stable-diffusion-webui-reForge\modules\txt2img.py", line 110, in txt2img_function
    processed = processing.process_images(p)
  File "D:\stable-diffusion-webui-reForge\modules\processing.py", line 2664, in process_images
    res = process_images_inner(p)
  File "D:\stable-diffusion-webui-reForge\modules\processing.py", line 2815, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "D:\stable-diffusion-webui-reForge\modules\processing.py", line 3189, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "D:\stable-diffusion-webui-reForge\modules\sd_samplers_kdiffusion.py", line 261, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "D:\stable-diffusion-webui-reForge\modules\sd_samplers_common.py", line 274, in launch_sampling
    return func()
  File "D:\stable-diffusion-webui-reForge\modules\sd_samplers_kdiffusion.py", line 261, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "D:\stable-diffusion-webui-reForge\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\stable-diffusion-webui-reForge\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "D:\stable-diffusion-webui-reForge\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\stable-diffusion-webui-reForge\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\stable-diffusion-webui-reForge\modules\sd_samplers_cfg_denoiser.py", line 225, in forward
    denoised = sampling_function(model, x, sigma, uncond_patched, cond_patched, cond_scale, model_options, seed)
  File "D:\stable-diffusion-webui-reForge\ldm_patched\modules\samplers.py", line 299, in sampling_function
    cond_pred, uncond_pred = calc_cond_uncond_batch(model, cond, uncond_, x, timestep, model_options)
  File "D:\stable-diffusion-webui-reForge\ldm_patched\modules\samplers.py", line 260, in calc_cond_uncond_batch
    output = model_options['model_function_wrapper'](model.apply_model, {"input": input_x, "timestep": timestep_, "c": c, "cond_or_uncond": cond_or_uncond}).chunk(batch_chunks)
  File "D:\stable-diffusion-webui-reForge\extensions-builtin\sd-forge-ic-light\lib_iclight\classic_ic_light_nodes.py", line 61, in wrapper_func
    return existing_wrapper(unet_apply, params=apply_c_concat(params))
  File "D:\stable-diffusion-webui-reForge\extensions-builtin\sd-forge-ic-light\lib_iclight\classic_ic_light_nodes.py", line 53, in unet_dummy_apply
    return unet_apply(x=params["input"], t=params["timestep"], **params["c"])
  File "D:\stable-diffusion-webui-reForge\ldm_patched\modules\model_base.py", line 90, in apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
  File "D:\stable-diffusion-webui-reForge\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\stable-diffusion-webui-reForge\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\stable-diffusion-webui-reForge\ldm_patched\ldm\modules\diffusionmodules\openaimodel.py", line 886, in forward
    h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator)
  File "D:\stable-diffusion-webui-reForge\ldm_patched\ldm\modules\diffusionmodules\openaimodel.py", line 61, in forward_timestep_embed
    x = layer(x)
  File "D:\stable-diffusion-webui-reForge\venv\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\stable-diffusion-webui-reForge\venv\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\stable-diffusion-webui-reForge\ldm_patched\modules\ops.py", line 114, in forward
    return super().forward(*args, **kwargs)
  File "D:\stable-diffusion-webui-reForge\venv\lib\site-packages\torch\nn\modules\conv.py", line 460, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "D:\stable-diffusion-webui-reForge\venv\lib\site-packages\torch\nn\modules\conv.py", line 456, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Given groups=1, weight of size [320, 4, 3, 3], expected input[2, 8, 64, 64] to have 4 channels, but got 8 channels instead
Given groups=1, weight of size [320, 4, 3, 3], expected input[2, 8, 64, 64] to have 4 channels, but got 8 channels instead
*** Error completing request
*** Arguments: ('task(2wbc4vysqkrnl14)', <gradio.routes.Request object at 0x000001D50DC21D50>, 'indoor', 'lowres,worst quality,bad quality', [], 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 20, 'Euler a', 'Karras', False, '', 0.8, -1, False, -1, 0, 0, 0, True, 'iclight_sd15_fc', 'None', 'Use Background Image', array([[[221, 232, 238, 255],
***         [222, 233, 238, 255],
***         [223, 233, 239, 255],
***         ...,
***         [ 86,  83,  87, 255],
***         [ 87,  84,  88, 255],
***         [ 88,  84,  88, 255]],
***
***        [[223, 234, 237, 255],
***         [224, 235, 240, 255],
***         [224, 234, 239, 255],
***         ...,
***         [ 87,  85,  89, 255],
***         [ 86,  84,  87, 255],
***         [ 87,  83,  87, 255]],
***
***        [[224, 234, 240, 255],
***         [224, 235, 239, 255],
***         [225, 235, 239, 255],
***         ...,
***         [ 87,  85,  89, 255],
***         [ 86,  83,  87, 255],
***         [ 86,  83,  87, 255]],
***
***        ...,
***
***        [[ 58,  65,  71, 255],
***         [ 54,  58,  67, 255],
***         [ 52,  58,  66, 255],
***         ...,
***         [ 49,  56,  66, 255],
***         [ 49,  57,  66, 255],
***         [ 50,  56,  65, 255]],
***
***        [[ 55,  59,  69, 255],
***         [ 52,  58,  66, 255],
***         [ 53,  60,  69, 255],
***         ...,
***         [ 49,  57,  65, 255],
***         [ 48,  57,  65, 255],
***         [ 49,  55,  65, 255]],
***
***        [[ 60,  60,  69, 255],
***         [ 56,  61,  68, 255],
***         [ 53,  59,  68, 255],
***         ...,
***         [ 49,  56,  65, 255],
***         [ 48,  56,  65, 255],
***         [ 53,  56,  65, 255]]], dtype=uint8), None, True, 'u2net_human_seg', 225, 16, 16, False, False, 3, False, ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], multi_inputs_gallery=[], generated_image=None, mask_image=None, hr_option=<HiResFixOption.BOTH: 'Both'>, enabled=False, module='None', model='None', weight=1, image=None, resize_mode=<ResizeMode.INNER_FIT: 'Crop and Resize'>, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode=<ControlMode.BALANCED: 'Balanced'>, advanced_weighting=None, save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], multi_inputs_gallery=[], generated_image=None, mask_image=None, hr_option=<HiResFixOption.BOTH: 'Both'>, enabled=False, module='None', model='None', weight=1, image=None, resize_mode=<ResizeMode.INNER_FIT: 'Crop and Resize'>, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode=<ControlMode.BALANCED: 'Balanced'>, advanced_weighting=None, save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], multi_inputs_gallery=[], generated_image=None, mask_image=None, hr_option=<HiResFixOption.BOTH: 'Both'>, enabled=False, module='None', model='None', weight=1, image=None, resize_mode=<ResizeMode.INNER_FIT: 'Crop and Resize'>, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode=<ControlMode.BALANCED: 'Balanced'>, advanced_weighting=None, save_detected_map=True), False, 7, 1, 'Constant', 0, 'Constant', 0, 1, 'enable', 'MEAN', 'AD', 1, False, 1.01, 1.02, 0.99, 0.95, False, 0.5, 2, False, 256, 2, 0, False, False, 3, 2, 0, 0.35, True, 'bicubic', 'bicubic', False, 9, -0.05, 15, 1, False, 0.7, False, 0, 'anisotropic', 0, 'reinhard', 100, 0, 'subtract', 0, 0, 'gaussian', 'add', 0, 100, 127, 0, 'hard_clamp', 5, 0, 'None', 'None', False, 'MultiDiffusion', 768, 768, 64, 4, False, False, False, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "D:\stable-diffusion-webui-reForge\modules\call_queue.py", line 74, in f
        res = list(func(*args, **kwargs))
    TypeError: 'NoneType' object is not iterable
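
For context, the trailing TypeError is just the generic failure path in call_queue.py: the wrapped task returned None after the exception above, and list(None) is not iterable. The actual failure is the RuntimeError: IC-Light concatenates an extra conditioning latent onto the 4-channel noise latent before the UNet's first convolution, so conv_in has to be widened to 8 input channels; here the concatenation happened, but the weight still has its stock [320, 4, 3, 3] shape. A minimal sketch reproducing the mismatch (the condition tensor below is a stand-in for the extension's actual conditioning latent):

```python
import torch
import torch.nn.functional as F

# Stock SD 1.5 conv_in: 320 output channels, 4 input channels, 3x3 kernel.
weight = torch.randn(320, 4, 3, 3)

latent = torch.randn(2, 4, 64, 64)         # the usual 4-channel noise latent
condition = torch.randn(2, 4, 64, 64)      # stand-in for IC-Light's conditioning latent
x = torch.cat([latent, condition], dim=1)  # shape [2, 8, 64, 64], as in the traceback

# Raises: Given groups=1, weight of size [320, 4, 3, 3], expected
# input[2, 8, 64, 64] to have 4 channels, but got 8 channels instead
F.conv2d(x, weight, padding=1)
```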

I am using your version of IC-Light (d8a5162), which is the latest.

Possible causes I have already ruled out:
1. The checkpoint is an SD 1.5 model.
2. The issue persists with all other extensions disabled.
3. I am on the latest reForge/main.

Could you please take a look?

Haoming02 commented 20 hours ago

Hi, I just tested with the latest IC-Light (d8a5162) and the latest reForge/main (0b7b65e), and I did not run into any issues.

Please check whether the line `calculate_weight Patched!` appears in the console when your Webui starts.
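
For reference, `calculate_weight Patched!` signals that the extension has hooked reForge's weight-patching routine so that IC-Light's 8-input-channel conv_in offsets can be merged onto the stock 4-channel weight. A rough sketch of the idea, assuming the hook zero-pads the base weight to the patch's shape (pad_base_to_patch and the shapes are illustrative, not the extension's actual code):

```python
import torch

def pad_base_to_patch(base: torch.Tensor, patch: torch.Tensor) -> torch.Tensor:
    """Illustrative only: widen a [320, 4, 3, 3] base weight with zeros so a
    [320, 8, 3, 3] IC-Light offset can be added on top of it."""
    if base.shape == patch.shape:
        return base
    widened = torch.zeros_like(patch)
    widened[:, : base.shape[1]] = base  # keep the original 4 input channels
    return widened

conv_in = torch.randn(320, 4, 3, 3)  # stock SD 1.5 conv_in weight
offset = torch.randn(320, 8, 3, 3)   # IC-Light offset covering 8 input channels
merged = pad_base_to_patch(conv_in, offset) + offset
print(merged.shape)  # torch.Size([320, 8, 3, 3]) -> conv_in now accepts 8 channels
```

Without such a hook, the merge would fail or be skipped and conv_in would keep expecting 4 channels, which is exactly the RuntimeError reported above.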

4KsTan commented 16 hours ago

> Hi, I just tested with the latest IC-Light (d8a5162) and the latest reForge/main (0b7b65e), and I did not run into any issues.
>
> • Infotext from a successful test generation:
>   (high quality, best quality), a woman standing in sunset
>   Negative prompt: (low quality, worst quality)
>   Steps: 24, Sampler: DPM++ 2M, Schedule type: Karras, CFG scale: 6, Seed: 1945077248, Size: 576x768, Model hash: 15012c538f, Model: realisticVisionV51_v51VAE, RNG: CPU, IC-Light: True, Version: f1.0.2-v1.10.1RC-latest-699-g0b7b65ef
>
> Please check whether the line `calculate_weight Patched!` appears in the console when your Webui starts.

venv "D:\stable-diffusion-webui-reForge\venv\Scripts\Python.exe"
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec  6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Version: f1.0.2-v1.10.1RC-latest-699-g0b7b65ef
Commit hash: 0b7b65ef1314e2301ef2bf717be43c2f33bfcaa9
Launching Web UI with arguments: --xformers --pin-shared-memory --cuda-malloc --cuda-stream --api --ckpt-dir 'D:\Models\Stable-diffusion' --vae-dir 'D:\Models\VAE' --lora-dir 'D:\Models\lora' --embeddings-dir 'D:\Models\embeddings' --hypernetwork-dir 'D:\Models\hypernetworks' --esrgan-models-path 'D:\Models\ESRGAN' --swinir-models-path 'D:\Models\SwinIR' --scunet-models-path 'D:\Models\ScuNET' --realesrgan-models-path 'D:\Models\RealESRGAN' --codeformer-models-path 'D:\Models\Codeformer' --gfpgan-models-path 'D:\Models\GFPGAN' --controlnet-dir 'D:\Models\controlnet'
Using cudaMallocAsync backend.
Total VRAM 16380 MB, total RAM 32439 MB
A matching Triton is not available, some optimizations will not be enabled
Traceback (most recent call last):
  File "D:\stable-diffusion-webui-reForge\venv\lib\site-packages\xformers\__init__.py", line 57, in _is_triton_available
    import triton  # noqa
ModuleNotFoundError: No module named 'triton'
xformers version: 0.0.27
Set vram state to: NORMAL_VRAM
Always pin shared GPU memory
Device: cuda:0 NVIDIA GeForce RTX 4060 Ti : cudaMallocAsync
VAE dtype: torch.bfloat16
CUDA Stream Activated:  True
D:\stable-diffusion-webui-reForge\venv\lib\site-packages\transformers\utils\hub.py:127: FutureWarning: Using `TRANSFORMERS_CACHE` is deprecated and will be removed in v5 of Transformers. Use `HF_HOME` instead.
  warnings.warn(
Using xformers cross attention
*** "Disable all extensions" option was set, will only load built-in extensions ***
ControlNet preprocessor location: D:\stable-diffusion-webui-reForge\models\ControlNetPreprocessor
D:\stable-diffusion-webui-reForge\venv\lib\site-packages\diffusers\models\transformers\transformer_2d.py:34: FutureWarning: `Transformer2DModelOutput` is deprecated and will be removed in version 1.0.0. Importing `Transformer2DModelOutput` from `diffusers.models.transformer_2d` is deprecated and this will be removed in a future version. Please use `from diffusers.models.modeling_outputs import Transformer2DModelOutput`, instead.
  deprecate("Transformer2DModelOutput", "1.0.0", deprecation_message)

calculate_weight Patched!

Loading model SD 1.5\meinamix_v12Final.safetensors [a5e5a941a3] (1 of 1)
Loading weights [a5e5a941a3] from D:\Models\Stable-diffusion\SD 1.5\meinamix_v12Final.safetensors
2024-10-24 14:42:48,689 - ControlNet - INFO - ControlNet UI callback registered.
model_type EPS
UNet ADM Dimension 0
Running on local URL:  http://127.0.0.1:7860
Using xformers attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using xformers attention in VAE
D:\stable-diffusion-webui-reForge\venv\lib\site-packages\transformers\tokenization_utils_base.py:1601: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be depracted in transformers v4.45, and will be then set to `False` by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884
  warnings.warn(
extra {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale'}
left over keys: dict_keys(['alphas_cumprod', 'alphas_cumprod_prev', 'betas', 'log_one_minus_alphas_cumprod', 'model_ema.decay', 'model_ema.num_updates', 'posterior_log_variance_clipped', 'posterior_mean_coef1', 'posterior_mean_coef2', 'posterior_variance', 'sqrt_alphas_cumprod', 'sqrt_one_minus_alphas_cumprod', 'sqrt_recip_alphas_cumprod', 'sqrt_recipm1_alphas_cumprod', 'cond_stage_model.clip_l.transformer.text_model.embeddings.position_ids'])
Loading VAE weights specified in settings: D:\Models\VAE\Counterfeit-V2.5.vae.pt
To load target model SD1ClipModel
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) =  15212.686912536621
[Memory Management] Model Memory (MB) =  454.20703506469727
[Memory Management] Minimal Inference Memory (MB) =  1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) =  13734.479877471924
Moving model(s) has taken 0.05 seconds
Model SD 1.5\meinamix_v12Final.safetensors [a5e5a941a3] loaded in 1.9s (load weights from disk: 0.2s, forge load real models: 1.0s, load VAE: 0.4s, calculate empty prompt: 0.2s).

To create a public link, set `share=True` in `launch()`.
Startup time: 12.3s (prepare environment: 1.8s, import torch: 3.3s, import gradio: 0.7s, setup paths: 1.0s, initialize shared: 0.2s, other imports: 0.3s, load scripts: 1.9s, create ui: 0.5s, gradio launch: 2.4s, add APIs: 0.3s).

Thanks for the reply. `calculate_weight Patched!` does indeed appear.