huchenlei / ComfyUI-IC-Light-Native

ComfyUI native implementation of IC-Light
Apache License 2.0

KSampler Given groups=1, weight of size [320, 4, 3, 3], expected input[2, 8, 96, 96] to have 4 channels, but got 8 channels instead #51

Closed. Sinyuk7 closed this issue 1 week ago.

Sinyuk7 commented 1 month ago

I also encountered the same problem; updating dependencies did not solve the issue.
All of the relevant Git repositories have been updated with git pull, but the issue still persists.

(screenshot attached)

Sinyuk7 commented 1 month ago

# ComfyUI Error Report

## Error Details

## System Information
- **ComfyUI Version:** v0.2.2-55-g3326bdf
- **Arguments:** ComfyUI\main.py --windows-standalone-build --fast
- **OS:** nt
- **Python Version:** 3.11.9 (tags/v3.11.9:de54cf5, Apr  2 2024, 10:12:12) [MSC v.1938 64 bit (AMD64)]
- **Embedded Python:** true
- **PyTorch Version:** 2.4.1+cu121
## Devices

- **Name:** cuda:0 NVIDIA GeForce RTX 4090 Laptop GPU : cudaMallocAsync
  - **Type:** cuda
  - **VRAM Total:** 17170956288
  - **VRAM Free:** 13561296230
  - **Torch VRAM Total:** 2147483648
  - **Torch VRAM Free:** 5305702

## Logs

2024-09-20 02:13:27,518 - root - INFO - Total VRAM 16376 MB, total RAM 65271 MB
2024-09-20 02:13:27,518 - root - INFO - pytorch version: 2.4.1+cu121
2024-09-20 02:13:27,519 - root - INFO - Set vram state to: NORMAL_VRAM
2024-09-20 02:13:27,519 - root - INFO - Device: cuda:0 NVIDIA GeForce RTX 4090 Laptop GPU : cudaMallocAsync
2024-09-20 02:13:28,019 - root - INFO - Using pytorch cross attention
2024-09-20 02:13:28,896 - root - INFO - [Prompt Server] web root: D:\ComfyUI_windows_portable\ComfyUI\web
2024-09-20 02:13:28,897 - root - INFO - Adding extra search path checkpoints D:\sd-webui-aki-v4.2\models/Stable-diffusion
2024-09-20 02:13:28,897 - root - INFO - Adding extra search path configs D:\sd-webui-aki-v4.2\models/Stable-diffusion
2024-09-20 02:13:28,898 - root - INFO - Adding extra search path vae D:\sd-webui-aki-v4.2\models/VAE
2024-09-20 02:13:28,898 - root - INFO - Adding extra search path loras D:\sd-webui-aki-v4.2\models/Lora
2024-09-20 02:13:28,898 - root - INFO - Adding extra search path loras D:\sd-webui-aki-v4.2\models/LyCORIS
2024-09-20 02:13:28,898 - root - INFO - Adding extra search path upscale_models D:\sd-webui-aki-v4.2\models/ESRGAN
2024-09-20 02:13:28,898 - root - INFO - Adding extra search path upscale_models D:\sd-webui-aki-v4.2\models/RealESRGAN
2024-09-20 02:13:28,898 - root - INFO - Adding extra search path upscale_models D:\sd-webui-aki-v4.2\models/SwinIR
2024-09-20 02:13:28,898 - root - INFO - Adding extra search path embeddings D:\sd-webui-aki-v4.2\embeddings
2024-09-20 02:13:28,898 - root - INFO - Adding extra search path hypernetworks D:\sd-webui-aki-v4.2\models/hypernetworks
2024-09-20 02:13:28,898 - root - INFO - Adding extra search path controlnet D:\sd-webui-aki-v4.2\models/ControlNet
2024-09-20 02:13:28,898 - root - INFO - Adding extra search path checkpoints D:\ComfyUI_windows_portable\ComfyUI\models/checkpoints/
2024-09-20 02:13:28,898 - root - INFO - Adding extra search path clip D:\ComfyUI_windows_portable\ComfyUI\models/clip/
2024-09-20 02:13:28,898 - root - INFO - Adding extra search path clip_vision D:\ComfyUI_windows_portable\ComfyUI\models/clip_vision/
2024-09-20 02:13:28,899 - root - INFO - Adding extra search path configs D:\ComfyUI_windows_portable\ComfyUI\models/configs/
2024-09-20 02:13:28,899 - root - INFO - Adding extra search path controlnet D:\ComfyUI_windows_portable\ComfyUI\models/controlnet/
2024-09-20 02:13:28,899 - root - INFO - Adding extra search path embeddings D:\ComfyUI_windows_portable\ComfyUI\models/embeddings/
2024-09-20 02:13:28,899 - root - INFO - Adding extra search path loras D:\ComfyUI_windows_portable\ComfyUI\models/loras/
2024-09-20 02:13:28,900 - root - INFO - Adding extra search path upscale_models D:\ComfyUI_windows_portable\ComfyUI\models/upscale_models/
2024-09-20 02:13:28,900 - root - INFO - Adding extra search path vae D:\ComfyUI_windows_portable\ComfyUI\models/vae/
2024-09-20 02:13:29,671 - root - INFO - Total VRAM 16376 MB, total RAM 65271 MB
2024-09-20 02:13:29,672 - root - INFO - pytorch version: 2.4.1+cu121
2024-09-20 02:13:29,672 - root - INFO - Set vram state to: NORMAL_VRAM
2024-09-20 02:13:29,672 - root - INFO - Device: cuda:0 NVIDIA GeForce RTX 4090 Laptop GPU : cudaMallocAsync
2024-09-20 02:13:30,160 - root - INFO - Import times for custom nodes:
2024-09-20 02:13:30,160 - root - INFO - 0.0 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\websocket_image_save.py
2024-09-20 02:13:30,160 - root - INFO - 0.0 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-IC-Light-Native
2024-09-20 02:13:30,160 - root - INFO - 0.0 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\cg-use-everywhere
2024-09-20 02:13:30,160 - root - INFO - 0.0 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-IC-Light-main
2024-09-20 02:13:30,160 - root - INFO - 0.0 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Custom-Scripts
2024-09-20 02:13:30,160 - root - INFO - 0.0 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\Derfuu_ComfyUI_ModdedNodes
2024-09-20 02:13:30,160 - root - INFO - 0.0 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-KJNodes-main
2024-09-20 02:13:30,160 - root - INFO - 0.0 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-TiledDiffusion
2024-09-20 02:13:30,160 - root - INFO - 0.0 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\x-flux-comfyui
2024-09-20 02:13:30,160 - root - INFO - 0.0 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\rgthree-comfy
2024-09-20 02:13:30,160 - root - INFO - 0.0 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_essentials
2024-09-20 02:13:30,160 - root - INFO - 0.0 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-IC-Light
2024-09-20 02:13:30,160 - root - INFO - 0.1 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-layerdiffuse
2024-09-20 02:13:30,160 - root - INFO - 0.3 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager-main
2024-09-20 02:13:30,160 - root - INFO - 0.5 seconds: D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Easy-Use
2024-09-20 02:13:30,160 - root - INFO -
2024-09-20 02:13:30,167 - root - INFO - Starting server

2024-09-20 02:13:30,167 - root - INFO - To see the GUI go to: http://127.0.0.1:8188 2024-09-20 02:13:34,218 - root - INFO - got prompt 2024-09-20 02:13:34,274 - root - INFO - model weight dtype torch.float16, manual cast: None 2024-09-20 02:13:34,284 - root - INFO - model_type EPS 2024-09-20 02:13:34,642 - root - INFO - Using pytorch attention in VAE 2024-09-20 02:13:34,642 - root - INFO - Using pytorch attention in VAE 2024-09-20 02:13:35,003 - root - INFO - Requested to load AutoencoderKL 2024-09-20 02:13:35,003 - root - INFO - Loading 1 new model 2024-09-20 02:13:35,084 - root - INFO - loaded completely 0.0 159.55708122253418 True 2024-09-20 02:13:35,780 - root - INFO - Requested to load SD1ClipModel 2024-09-20 02:13:35,780 - root - INFO - Loading 1 new model 2024-09-20 02:13:35,876 - root - INFO - loaded completely 0.0 235.84423828125 True 2024-09-20 02:13:37,971 - root - INFO - Requested to load BaseModel 2024-09-20 02:13:37,972 - root - INFO - Loading 1 new model 2024-09-20 02:13:39,392 - root - WARNING - WARNING SHAPE MISMATCH diffusion_model.input_blocks.0.0.weight WEIGHT NOT MERGED torch.Size([320, 8, 3, 3]) != torch.Size([320, 4, 3, 3]) 2024-09-20 02:13:39,489 - root - INFO - loaded completely 0.0 1639.406135559082 True 2024-09-20 02:13:39,600 - root - ERROR - !!! Exception during processing !!! Given groups=1, weight of size [320, 4, 3, 3], expected input[2, 8, 64, 64] to have 4 channels, but got 8 channels instead 2024-09-20 02:13:39,606 - root - ERROR - Traceback (most recent call last): File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list process_inputs(input_dict, i) File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs results.append(getattr(obj, func)(inputs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1430, in sample return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1397, in common_ksampler samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\sample.py", line 43, in sample samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed) 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-TiledDiffusion\utils.py", line 51, in KSampler_sample return orig_fn(*args, *kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 829, in sample return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 729, in sample return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 716, in sample output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 695, in inner_sample samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-TiledDiffusion\utils.py", line 34, in KSAMPLER_sample return orig_fn(args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 600, in sample samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, self.extra_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils_contextlib.py", line 116, in decorate_context return func(*args, *kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\sampling.py", line 144, in sample_euler denoised = model(x, sigma_hat s_in, extra_args) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 299, in call out = self.inner_model(x, sigma, model_options=model_options, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 682, in call return self.predict_noise(*args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 685, in predict_noise return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
"D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 279, in sampling_function out = calc_cond_batch(model, conds, x, timestep, model_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 228, in calc_cond_batch output = model.apply_model(inputx, timestep, c).chunk(batch_chunks) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\model_base.py", line 142, in apply_model model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, extra_conds).float() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, *kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl return forward_call(args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 857, in forward h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 50, in forward_timestep_embed x = layer(x) ^^^^^^^^ File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl return forward_call(*args, *kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\ops.py", line 106, in forward return super().forward(args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\conv.py", line 458, in forward return self._conv_forward(input, self.weight, self.bias) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\conv.py", line 454, in _conv_forward return F.conv2d(input, weight, bias, self.stride, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ RuntimeError: Given groups=1, weight of size [320, 4, 3, 3], expected input[2, 8, 64, 64] to have 4 channels, but got 8 channels instead

2024-09-20 02:13:39,610 - root - INFO - Prompt executed in 5.38 seconds 2024-09-20 02:14:57,033 - root - INFO - got prompt 2024-09-20 02:14:59,073 - root - INFO - Requested to load BaseModel 2024-09-20 02:14:59,073 - root - INFO - Loading 1 new model 2024-09-20 02:15:00,616 - root - WARNING - WARNING SHAPE MISMATCH diffusion_model.input_blocks.0.0.weight WEIGHT NOT MERGED torch.Size([320, 12, 3, 3]) != torch.Size([320, 4, 3, 3]) 2024-09-20 02:15:00,708 - root - INFO - loaded completely 0.0 1639.406135559082 True 2024-09-20 02:15:00,727 - root - ERROR - !!! Exception during processing !!! Input channels 8 does not match model in_channels 12, 'opt_background' latent input should be used with the IC-Light 'fbc' model, and only with it 2024-09-20 02:15:00,761 - root - ERROR - Traceback (most recent call last): File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list process_inputs(input_dict, i) File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs results.append(getattr(obj, func)(*inputs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1430, in sample return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1397, in common_ksampler samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\sample.py", line 43, in sample samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-TiledDiffusion\utils.py", line 51, in KSampler_sample return orig_fn(args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 829, in sample return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed) 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 729, in sample return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 716, in sample output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 691, in inner_sample self.conds = process_conds(self.inner_model, noise, self.conds, device, latent_image, denoise_mask, seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 643, in process_conds conds[k] = encode_model_conds(model.extra_conds, conds[k], noise, device, k, latent_image=latent_image, denoise_mask=denoise_mask, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 554, in encode_model_conds out = model_function(params) ^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-IC-Light-main\nodes.py", line 77, in bound_extra_conds return ICLight.extra_conds(self, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-IC-Light-main\nodes.py", line 99, in extra_conds raise Exception(f"Input channels {input_channels} does not match model in_channels {model_in_channels}, 'opt_background' latent input should be used with the IC-Light 'fbc' model, and only with it") Exception: Input channels 8 does not match model in_channels 12, 'opt_background' latent input should be used with the IC-Light 'fbc' model, and only with it
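
This second traceback is the opposite mismatch, caught by the ComfyUI-IC-Light-main node itself: the merged 'fbc' weights expect 12 input channels (noise + foreground + background latents), but only 8 channels were assembled because no 'opt_background' latent was connected. A rough sketch of that channel arithmetic, with assumed variable names (not the node's exact code, which raises the exception shown in the log):

```python
# Rough sketch with assumed variable names (not the node's exact code): each
# latent contributes 4 channels, so 'fc' expects 8 and 'fbc' expects 12.
LATENT_CHANNELS = 4

def built_input_channels(has_background_latent: bool) -> int:
    channels = LATENT_CHANNELS                   # noise latent being denoised
    channels += LATENT_CHANNELS                  # concatenated foreground latent
    if has_background_latent:                    # only wired up for the 'fbc' model
        channels += LATENT_CHANNELS              # concatenated opt_background latent
    return channels

model_in_channels = 12                           # 'fbc' weights were merged into the UNet
input_channels = built_input_channels(has_background_latent=False)  # -> 8
if input_channels != model_in_channels:
    raise Exception(
        f"Input channels {input_channels} does not match model in_channels {model_in_channels}, "
        "'opt_background' latent input should be used with the IC-Light 'fbc' model, and only with it"
    )
```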

2024-09-20 02:15:00,764 - root - INFO - Prompt executed in 3.72 seconds 2024-09-20 02:16:01,183 - root - INFO - got prompt 2024-09-20 02:16:03,149 - root - INFO - Requested to load BaseModel 2024-09-20 02:16:03,149 - root - INFO - Loading 1 new model 2024-09-20 02:16:04,692 - root - WARNING - WARNING SHAPE MISMATCH diffusion_model.input_blocks.0.0.weight WEIGHT NOT MERGED torch.Size([320, 8, 3, 3]) != torch.Size([320, 4, 3, 3]) 2024-09-20 02:16:04,789 - root - INFO - loaded completely 0.0 1639.406135559082 True 2024-09-20 02:16:04,818 - root - ERROR - !!! Exception during processing !!! Given groups=1, weight of size [320, 4, 3, 3], expected input[2, 8, 64, 64] to have 4 channels, but got 8 channels instead 2024-09-20 02:16:04,820 - root - ERROR - Traceback (most recent call last): File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list process_inputs(input_dict, i) File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs results.append(getattr(obj, func)(inputs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1430, in sample return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1397, in common_ksampler samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\sample.py", line 43, in sample samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-TiledDiffusion\utils.py", line 51, in KSampler_sample return orig_fn(*args, *kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 829, in sample return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed) 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 729, in sample return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 716, in sample output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 695, in inner_sample samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-TiledDiffusion\utils.py", line 34, in KSAMPLER_sample return orig_fn(args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 600, in sample samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, self.extra_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils_contextlib.py", line 116, in decorate_context return func(*args, *kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\sampling.py", line 144, in sample_euler denoised = model(x, sigma_hat s_in, extra_args) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 299, in call out = self.inner_model(x, sigma, model_options=model_options, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 682, in call return self.predict_noise(*args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 685, in predict_noise return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 279, in sampling_function out = calc_cond_batch(model, conds, x, timestep, model_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 228, in calc_cond_batch output = model.apply_model(inputx, timestep, c).chunk(batch_chunks) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\model_base.py", line 142, in apply_model model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, extra_conds).float() 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, *kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl return forward_call(args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 857, in forward h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 50, in forward_timestep_embed x = layer(x) ^^^^^^^^ File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl return forward_call(*args, *kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\ops.py", line 106, in forward return super().forward(args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\conv.py", line 458, in forward return self._conv_forward(input, self.weight, self.bias) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\conv.py", line 454, in _conv_forward return F.conv2d(input, weight, bias, self.stride, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ RuntimeError: Given groups=1, weight of size [320, 4, 3, 3], expected input[2, 8, 64, 64] to have 4 channels, but got 8 channels instead

2024-09-20 02:16:04,824 - root - INFO - Prompt executed in 3.63 seconds 2024-09-20 02:24:34,463 - root - INFO - got prompt 2024-09-20 02:24:35,255 - root - ERROR - !!! Exception during processing !!! Given groups=1, weight of size [320, 4, 3, 3], expected input[2, 8, 96, 96] to have 4 channels, but got 8 channels instead 2024-09-20 02:24:35,256 - root - ERROR - Traceback (most recent call last): File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list process_inputs(input_dict, i) File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs results.append(getattr(obj, func)(inputs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1430, in sample return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1397, in common_ksampler samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\sample.py", line 43, in sample samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-TiledDiffusion\utils.py", line 51, in KSampler_sample return orig_fn(*args, *kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 829, in sample return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 729, in sample return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed) 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 716, in sample output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 695, in inner_sample samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-TiledDiffusion\utils.py", line 34, in KSAMPLER_sample return orig_fn(args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 600, in sample samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, self.extra_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils_contextlib.py", line 116, in decorate_context return func(*args, *kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\sampling.py", line 144, in sample_euler denoised = model(x, sigma_hat s_in, extra_args) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 299, in call out = self.inner_model(x, sigma, model_options=model_options, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 682, in call return self.predict_noise(*args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 685, in predict_noise return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 279, in sampling_function out = calc_cond_batch(model, conds, x, timestep, model_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 228, in calc_cond_batch output = model.apply_model(inputx, timestep, c).chunk(batch_chunks) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\model_base.py", line 142, in apply_model model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, extra_conds).float() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, *kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl return forward_call(args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 
File "D:\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 857, in forward h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 50, in forward_timestep_embed x = layer(x) ^^^^^^^^ File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl return forward_call(*args, *kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\ops.py", line 106, in forward return super().forward(args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\conv.py", line 458, in forward return self._conv_forward(input, self.weight, self.bias) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\conv.py", line 454, in _conv_forward return F.conv2d(input, weight, bias, self.stride, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ RuntimeError: Given groups=1, weight of size [320, 4, 3, 3], expected input[2, 8, 96, 96] to have 4 channels, but got 8 channels instead

2024-09-20 02:24:35,259 - root - INFO - Prompt executed in 0.79 seconds 2024-09-20 02:28:30,914 - root - INFO - got prompt 2024-09-20 02:28:32,965 - root - INFO - Requested to load BaseModel 2024-09-20 02:28:32,965 - root - INFO - Loading 1 new model 2024-09-20 02:28:34,702 - root - WARNING - WARNING SHAPE MISMATCH diffusion_model.input_blocks.0.0.weight WEIGHT NOT MERGED torch.Size([320, 12, 3, 3]) != torch.Size([320, 4, 3, 3]) 2024-09-20 02:28:34,797 - root - INFO - loaded completely 0.0 1639.406135559082 True 2024-09-20 02:28:34,822 - root - ERROR - !!! Exception during processing !!! Input channels 8 does not match model in_channels 12, 'opt_background' latent input should be used with the IC-Light 'fbc' model, and only with it 2024-09-20 02:28:34,824 - root - ERROR - Traceback (most recent call last): File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list process_inputs(input_dict, i) File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs results.append(getattr(obj, func)(*inputs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1430, in sample return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1397, in common_ksampler samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\sample.py", line 43, in sample samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-TiledDiffusion\utils.py", line 51, in KSampler_sample return orig_fn(args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 829, in sample return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed) 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 729, in sample return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 716, in sample output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 691, in inner_sample self.conds = process_conds(self.inner_model, noise, self.conds, device, latent_image, denoise_mask, seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 643, in process_conds conds[k] = encode_model_conds(model.extra_conds, conds[k], noise, device, k, latent_image=latent_image, denoise_mask=denoise_mask, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 554, in encode_model_conds out = model_function(params) ^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-IC-Light-main\nodes.py", line 77, in bound_extra_conds return ICLight.extra_conds(self, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-IC-Light-main\nodes.py", line 99, in extra_conds raise Exception(f"Input channels {input_channels} does not match model in_channels {model_in_channels}, 'opt_background' latent input should be used with the IC-Light 'fbc' model, and only with it") Exception: Input channels 8 does not match model in_channels 12, 'opt_background' latent input should be used with the IC-Light 'fbc' model, and only with it

2024-09-20 02:28:34,827 - root - INFO - Prompt executed in 3.90 seconds 2024-09-20 02:28:53,931 - root - INFO - got prompt 2024-09-20 02:28:55,915 - root - INFO - Requested to load BaseModel 2024-09-20 02:28:55,916 - root - INFO - Loading 1 new model 2024-09-20 02:28:57,517 - root - WARNING - WARNING SHAPE MISMATCH diffusion_model.input_blocks.0.0.weight WEIGHT NOT MERGED torch.Size([320, 8, 3, 3]) != torch.Size([320, 4, 3, 3]) 2024-09-20 02:28:57,605 - root - INFO - loaded completely 0.0 1639.406135559082 True 2024-09-20 02:28:57,630 - root - ERROR - !!! Exception during processing !!! Given groups=1, weight of size [320, 4, 3, 3], expected input[2, 8, 96, 96] to have 4 channels, but got 8 channels instead 2024-09-20 02:28:57,633 - root - ERROR - Traceback (most recent call last): File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list process_inputs(input_dict, i) File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs results.append(getattr(obj, func)(inputs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1430, in sample return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1397, in common_ksampler samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\sample.py", line 43, in sample samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-TiledDiffusion\utils.py", line 51, in KSampler_sample return orig_fn(*args, *kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 829, in sample return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed) 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 729, in sample return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 716, in sample output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 695, in inner_sample samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-TiledDiffusion\utils.py", line 34, in KSAMPLER_sample return orig_fn(args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 600, in sample samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, self.extra_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils_contextlib.py", line 116, in decorate_context return func(*args, *kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\sampling.py", line 144, in sample_euler denoised = model(x, sigma_hat s_in, extra_args) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 299, in call out = self.inner_model(x, sigma, model_options=model_options, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 682, in call return self.predict_noise(*args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 685, in predict_noise return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 279, in sampling_function out = calc_cond_batch(model, conds, x, timestep, model_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 228, in calc_cond_batch output = model.apply_model(inputx, timestep, c).chunk(batch_chunks) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\model_base.py", line 142, in apply_model model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, extra_conds).float() 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, *kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl return forward_call(args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 857, in forward h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 50, in forward_timestep_embed x = layer(x) ^^^^^^^^ File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl return forward_call(*args, *kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\comfy\ops.py", line 106, in forward return super().forward(args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\conv.py", line 458, in forward return self._conv_forward(input, self.weight, self.bias) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\conv.py", line 454, in _conv_forward return F.conv2d(input, weight, bias, self.stride, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ RuntimeError: Given groups=1, weight of size [320, 4, 3, 3], expected input[2, 8, 96, 96] to have 4 channels, but got 8 channels instead

2024-09-20 02:28:57,638 - root - INFO - Prompt executed in 3.69 seconds
2024-09-20 02:30:00,021 - root - INFO - got prompt
2024-09-20 02:30:01,937 - root - INFO - Requested to load BaseModel
2024-09-20 02:30:01,937 - root - INFO - Loading 1 new model
2024-09-20 02:30:03,491 - root - WARNING - WARNING SHAPE MISMATCH diffusion_model.input_blocks.0.0.weight WEIGHT NOT MERGED torch.Size([320, 8, 3, 3]) != torch.Size([320, 4, 3, 3])
2024-09-20 02:30:03,584 - root - INFO - loaded completely 0.0 1639.406135559082 True
2024-09-20 02:30:03,613 - root - ERROR - !!! Exception during processing !!! Given groups=1, weight of size [320, 4, 3, 3], expected input[2, 8, 96, 96] to have 4 channels, but got 8 channels instead
2024-09-20 02:30:03,615 - root - ERROR - Traceback (most recent call last):
  File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "D:\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1430, in sample
    return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
  File "D:\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1397, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
  File "D:\ComfyUI_windows_portable\ComfyUI\comfy\sample.py", line 43, in sample
    samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-TiledDiffusion\utils.py", line 51, in KSampler_sample
    return orig_fn(*args, **kwargs)
  File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 829, in sample
    return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 729, in sample
    return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
  File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 716, in sample
    output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
  File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 695, in inner_sample
    samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
  File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-TiledDiffusion\utils.py", line 34, in KSAMPLER_sample
    return orig_fn(*args, **kwargs)
  File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 600, in sample
    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
  File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "D:\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\sampling.py", line 144, in sample_euler
    denoised = model(x, sigma_hat * s_in, **extra_args)
  File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 299, in __call__
    out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
  File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 682, in __call__
    return self.predict_noise(*args, **kwargs)
  File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 685, in predict_noise
    return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
  File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 279, in sampling_function
    out = calc_cond_batch(model, conds, x, timestep, model_options)
  File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 228, in calc_cond_batch
    output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
  File "D:\ComfyUI_windows_portable\ComfyUI\comfy\model_base.py", line 142, in apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
  File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 857, in forward
    h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator)
  File "D:\ComfyUI_windows_portable\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 50, in forward_timestep_embed
    x = layer(x)
  File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\ComfyUI_windows_portable\ComfyUI\comfy\ops.py", line 106, in forward
    return super().forward(*args, **kwargs)
  File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\conv.py", line 458, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\conv.py", line 454, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Given groups=1, weight of size [320, 4, 3, 3], expected input[2, 8, 96, 96] to have 4 channels, but got 8 channels instead

2024-09-20 02:30:03,618 - root - INFO - Prompt executed in 3.58 seconds
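
The log already points at the root cause: the IC-Light input-convolution weight (`torch.Size([320, 8, 3, 3])`) is not merged into the base SD1.5 UNet, whose first convolution expects only 4 input channels, so the sampler ends up feeding an 8-channel latent (noise latent concatenated with the encoded foreground) into an unpatched 4-channel conv. A minimal sketch that reproduces the same RuntimeError, purely illustrative and not code from this repository:

```python
# Illustrative repro of the error above (not this repository's code).
import torch
import torch.nn as nn

# The stock SD1.5 UNet input convolution expects a 4-channel latent.
conv_in = nn.Conv2d(in_channels=4, out_channels=320, kernel_size=3, padding=1)

# IC-Light concatenates the foreground latent with the noise latent,
# so each sample arrives with 8 channels instead of 4.
latent = torch.randn(2, 8, 96, 96)

# Raises: RuntimeError: Given groups=1, weight of size [320, 4, 3, 3],
# expected input[2, 8, 96, 96] to have 4 channels, but got 8 channels instead
out = conv_in(latent)
```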

## Attached Workflow
Please make sure that the workflow does not contain any sensitive information such as API keys or passwords.

{"last_node_id":119,"last_link_id":283,"nodes":[{"id":63,"type":"Reroute (rgthree)","pos":{"0":1144,"1":1584},"size":[40,30],"flags":{},"order":17,"mode":0,"inputs":[{"name":"","type":"","link":192,"label":" ","dir":3,"has_old_label":true,"old_label":""}],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[191],"label":" ","dir":4,"has_old_label":true,"old_label":""}],"properties":{"resizable":false,"size":[40,30],"connections_layout":["Left","Right"],"ttNbgOverride":{"color":"#1f1f48","groupcolor":"#88A"}},"color":"#1f1f48"},{"id":51,"type":"GetImageSize+","pos":{"0":100.2005386352539,"1":845.8355712890625},"size":{"0":210,"1":66},"flags":{},"order":11,"mode":0,"inputs":[{"name":"image","type":"IMAGE","link":252,"label":"image"}],"outputs":[{"name":"width","type":"INT","links":[173,253],"slot_index":0,"shape":3,"label":"width"},{"name":"height","type":"INT","links":[174,255],"slot_index":1,"shape":3,"label":"height"},{"name":"count","type":"INT","links":null,"shape":3}],"properties":{"Node name for S&R":"GetImageSize+","ttNbgOverride":{"color":"#1f1f48","groupcolor":"#88A"}},"color":"#1f1f48"},{"id":45,"type":"VAEEncode","pos":{"0":98,"1":597},"size":{"0":210,"1":46},"flags":{},"order":12,"mode":0,"inputs":[{"name":"pixels","type":"IMAGE","link":257,"slot_index":0,"label":"pixels"},{"name":"vae","type":"VAE","link":null,"slot_index":1,"label":"vae"}],"outputs":[{"name":"LATENT","type":"LATENT","links":[160],"shape":3,"label":"LATENT"}],"properties":{"Node name for S&R":"VAEEncode","ttNbgOverride":{"color":"#1f1f48","groupcolor":"#88A"}},"color":"#1f1f48"},{"id":42,"type":"GrowMaskWithBlur","pos":{"0":364.2005310058594,"1":1175.8355712890625},"size":{"0":315,"1":246},"flags":{},"order":19,"mode":0,"inputs":[{"name":"mask","type":"MASK","link":152,"label":"mask"}],"outputs":[{"name":"mask","type":"MASK","links":[154],"slot_index":0,"shape":3,"label":"mask"},{"name":"mask_inverted","type":"MASK","links":null,"shape":3,"label":"mask_inverted"}],"properties":{"Node name for S&R":"GrowMaskWithBlur","ttNbgOverride":{"color":"#1f1f48","groupcolor":"#88A"}},"widgets_values":[0,0,true,false,100,1,1,false],"color":"#1f1f48"},{"id":5,"type":"CLIPTextEncode","pos":{"0":49,"1":413},"size":{"0":287.9266662597656,"1":76},"flags":{},"order":7,"mode":0,"inputs":[{"name":"clip","type":"CLIP","link":5,"label":"clip"}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[158],"slot_index":0,"shape":3,"label":"CONDITIONING"}],"properties":{"Node name for S&R":"CLIPTextEncode"},"widgets_values":["bad quality"],"color":"#c09430"},{"id":101,"type":"MathExpression|pysssss","pos":{"0":117.64488983154297,"1":1146.797607421875},"size":{"0":210,"1":116.00004577636719},"flags":{},"order":15,"mode":0,"inputs":[{"name":"a","type":"INT,FLOAT,IMAGE,LATENT","link":255,"label":"a"},{"name":"b","type":"INT,FLOAT,IMAGE,LATENT","link":null,"label":"b"},{"name":"c","type":"INT,FLOAT,IMAGE,LATENT","link":null,"label":"c"}],"outputs":[{"name":"INT","type":"INT","links":[256],"slot_index":0,"shape":3,"label":"INT"},{"name":"FLOAT","type":"FLOAT","links":null,"shape":3,"label":"FLOAT"}],"properties":{"Node name for 
S&R":"MathExpression|pysssss","ttNbgOverride":{"color":"#1f1f48","groupcolor":"#88A"}},"widgets_values":["a1\n\n"],"color":"#1f1f48"},{"id":7,"type":"VAEDecode","pos":{"0":1870,"1":264},"size":{"0":210,"1":46},"flags":{},"order":25,"mode":0,"inputs":[{"name":"samples","type":"LATENT","link":77,"label":"samples"},{"name":"vae","type":"VAE","link":null,"slot_index":1,"label":"vae"}],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[187,259],"slot_index":0,"shape":3,"label":"IMAGE"}],"properties":{"Node name for S&R":"VAEDecode","ttNbgOverride":{"color":"#1f1f48","groupcolor":"#88A"}},"color":"#1f1f48"},{"id":100,"type":"MathExpression|pysssss","pos":{"0":112.64488983154297,"1":964.7975463867188},"size":{"0":210,"1":116.00004577636719},"flags":{},"order":14,"mode":0,"inputs":[{"name":"a","type":"INT,FLOAT,IMAGE,LATENT","link":253,"label":"a"},{"name":"b","type":"INT,FLOAT,IMAGE,LATENT","link":null,"label":"b"},{"name":"c","type":"INT,FLOAT,IMAGE,LATENT","link":null,"label":"c"}],"outputs":[{"name":"INT","type":"INT","links":[254],"slot_index":0,"shape":3,"label":"INT"},{"name":"FLOAT","type":"FLOAT","links":null,"shape":3,"label":"FLOAT"}],"properties":{"Node name for S&R":"MathExpression|pysssss","ttNbgOverride":{"color":"#1f1f48","groupcolor":"#88A"}},"widgets_values":["a1\n\n\n"],"color":"#1f1f48"},{"id":50,"type":"Anything Everywhere","pos":{"0":-344,"1":70},"size":{"0":220,"1":26},"flags":{},"order":8,"mode":0,"inputs":[{"name":"VAE","type":"","link":171,"label":"VAE","color_on":"#FF6E6E"}],"outputs":[],"properties":{"Node name for S&R":"Anything Everywhere","group_restricted":0,"color_restricted":0,"ttNbgOverride":{"color":"#1f1f48","groupcolor":"#88A"}},"widgets_values":[],"color":"#1f1f48"},{"id":99,"type":"DF_Image_scale_to_side","pos":{"0":-399,"1":598},"size":{"0":315,"1":130},"flags":{},"order":9,"mode":0,"inputs":[{"name":"image","type":"IMAGE","link":250,"label":"image"}],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[252,257,270],"slot_index":0,"shape":3,"label":"IMAGE"}],"properties":{"Node name for S&R":"DF_Image_scale_to_side","ttNbgOverride":{"color":"#1f1f48","groupcolor":"#88A"}},"widgets_values":[768,"Longest","bicubic","disabled"],"color":"#1f1f48"},{"id":62,"type":"Reroute (rgthree)","pos":{"0":87,"1":1574},"size":[40,30],"flags":{},"order":13,"mode":0,"inputs":[{"name":"","type":"","link":270,"label":" ","dir":3,"has_old_label":true,"old_label":""}],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[192],"label":" ","dir":4,"has_old_label":true,"old_label":""}],"properties":{"resizable":false,"size":[40,30],"connections_layout":["Left","Right"],"ttNbgOverride":{"color":"#1f1f48","groupcolor":"#88A"}},"color":"#1f1f48"},{"id":22,"type":"CreateShapeMask","pos":{"0":369.1140441894531,"1":848.235107421875},"size":{"0":315,"1":270},"flags":{},"order":18,"mode":0,"inputs":[{"name":"frame_width","type":"INT","link":173,"slot_index":0,"widget":{"name":"frame_width"},"label":"frame_width"},{"name":"frame_height","type":"INT","link":174,"widget":{"name":"frame_height"},"label":"frame_height"},{"name":"shape_width","type":"INT","link":254,"widget":{"name":"shape_width"},"label":"shape_width"},{"name":"shape_height","type":"INT","link":256,"widget":{"name":"shape_height"},"label":"shape_height"}],"outputs":[{"name":"mask","type":"MASK","links":[152],"slot_index":0,"shape":3,"label":"mask"},{"name":"mask_inverted","type":"MASK","links":null,"shape":3,"label":"mask_inverted"}],"properties":{"Node name for 
S&R":"CreateShapeMask","ttNbgOverride":{"color":"#1f1f48","groupcolor":"#88A"}},"widgets_values":["circle",1,0,0,0,512,512,800,800],"color":"#1f1f48"},{"id":35,"type":"MaskToImage","pos":{"0":805.114013671875,"1":967.235107421875},"size":{"0":210,"1":26},"flags":{},"order":21,"mode":0,"inputs":[{"name":"mask","type":"MASK","link":155,"label":"mask"}],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[169,268],"slot_index":0,"shape":3,"label":"IMAGE"}],"properties":{"Node name for S&R":"MaskToImage","ttNbgOverride":{"color":"#1f1f48","groupcolor":"#88A"}},"color":"#1f1f48"},{"id":111,"type":"VAEEncode","pos":{"0":1127,"1":600},"size":{"0":210,"1":46},"flags":{},"order":23,"mode":0,"inputs":[{"name":"pixels","type":"IMAGE","link":268,"label":"pixels"},{"name":"vae","type":"VAE","link":null,"label":"vae"}],"outputs":[{"name":"LATENT","type":"LATENT","links":[269],"slot_index":0,"shape":3,"label":"LATENT"}],"properties":{"Node name for S&R":"VAEEncode","ttNbgOverride":{"color":"#1f1f48","groupcolor":"#88A"}},"color":"#1f1f48"},{"id":43,"type":"RemapMaskRange","pos":{"0":730.7982788085938,"1":835.2877807617188},"size":{"0":315,"1":82},"flags":{},"order":20,"mode":4,"inputs":[{"name":"mask","type":"MASK","link":154,"label":"mask"}],"outputs":[{"name":"mask","type":"MASK","links":[155],"slot_index":0,"shape":3,"label":"mask"}],"properties":{"Node name for S&R":"RemapMaskRange","ttNbgOverride":{"color":"#1f1f48","groupcolor":"#88A"}},"widgets_values":[0.08,0.85],"color":"#1f1f48"},{"id":41,"type":"PreviewImage","pos":{"0":717,"1":1046},"size":{"0":377.3042297363281,"1":387.90838623046875},"flags":{},"order":22,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":169,"slot_index":0,"label":"images"}],"outputs":[],"properties":{"Node name for S&R":"PreviewImage","ttNbgOverride":{"color":"#1f1f48","groupcolor":"#88A"}},"color":"#1f1f48"},{"id":4,"type":"CLIPTextEncode","pos":{"0":46,"1":279},"size":{"0":290.89105224609375,"1":76},"flags":{},"order":6,"mode":0,"inputs":[{"name":"clip","type":"CLIP","link":4,"label":"clip"}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[157],"slot_index":0,"shape":3,"label":"CONDITIONING"}],"properties":{"Node name for S&R":"CLIPTextEncode"},"widgets_values":["1girl,fsunset lighting, garden background"],"color":"#c09430"},{"id":60,"type":"Image Comparer 
(rgthree)","pos":{"0":1631,"1":956},"size":{"0":861.8671875,"1":625.3927612304688},"flags":{},"order":26,"mode":0,"inputs":[{"name":"image_a","type":"IMAGE","link":191,"label":"image_a","dir":3},{"name":"image_b","type":"IMAGE","link":187,"label":"image_b","dir":3}],"outputs":[],"properties":{"comparer_mode":"Slide","ttNbgOverride":{"color":"#1f1f48","groupcolor":"#88A"}},"widgets_values":[[{"url":"/view?filename=rgthree.compare._temp_tatiy00003.png&type=temp&subfolder=&rand=0.006208143769111718","name":"A","selected":true},{"url":"/view?filename=rgthree.compare._temp_tatiy00004.png&type=temp&subfolder=&rand=0.008069675296547674","name":"B","selected":true}]],"color":"#1f1f48"},{"id":19,"type":"KSampler","pos":{"0":1442,"1":256},"size":{"0":315,"1":262},"flags":{},"order":24,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":277,"slot_index":0,"label":"model"},{"name":"positive","type":"CONDITIONING","link":281,"label":"positive"},{"name":"negative","type":"CONDITIONING","link":162,"slot_index":2,"label":"negative"},{"name":"latent_image","type":"LATENT","link":269,"label":"latent_image"}],"outputs":[{"name":"LATENT","type":"LATENT","links":[77],"slot_index":0,"shape":3,"label":"LATENT"}],"properties":{"Node name for S&R":"KSampler"},"widgets_values":[513614387279965,"fixed",25,2,"euler","sgm_uniform",1],"color":"#346434"},{"id":103,"type":"SaveImage","pos":{"0":1870,"1":374},"size":{"0":602.5935668945312,"1":505.34771728515625},"flags":{},"order":27,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":259,"label":"images"}],"outputs":[],"properties":{"Node name for S&R":"SaveImage","ttNbgOverride":{"color":"#1f1f48","groupcolor":"#88A"}},"widgets_values":["lesson-iclighting/a"],"color":"#1f1f48"},{"id":117,"type":"Note","pos":{"0":1091,"1":840},"size":{"0":210,"1":58},"flags":{},"order":0,"mode":0,"inputs":[],"outputs":[],"properties":{"text":""},"widgets_values":["左边这个节点时对整个画面的灰度做柔化处理,测试一下即可发现区别"],"color":"#432","bgcolor":"#653"},{"id":118,"type":"Note","pos":{"0":727,"1":-36},"size":{"0":210,"1":58},"flags":{},"order":1,"mode":0,"inputs":[],"outputs":[],"properties":{"text":""},"widgets_values":["放在comfyui/models/unet中\n可以创建子文件夹"],"color":"#432","bgcolor":"#653"},{"id":119,"type":"Note","pos":{"0":114,"1":1315},"size":{"0":210,"1":58},"flags":{},"order":2,"mode":0,"inputs":[],"outputs":[],"properties":{"text":""},"widgets_values":["上面这两个算式只是为了方便灵活控制光圈的大小,可以去掉"],"color":"#432","bgcolor":"#653"},{"id":2,"type":"CheckpointLoaderSimple","pos":{"0":-398,"1":154},"size":{"0":315,"1":98},"flags":{},"order":3,"mode":0,"inputs":[],"outputs":[{"name":"MODEL","type":"MODEL","links":[271],"slot_index":0,"shape":3,"label":"MODEL"},{"name":"CLIP","type":"CLIP","links":[4,5],"slot_index":1,"shape":3,"label":"CLIP"},{"name":"VAE","type":"VAE","links":[171],"slot_index":2,"shape":3,"label":"VAE"}],"properties":{"Node name for S&R":"CheckpointLoaderSimple"},"widgets_values":["15\majicmixRealistic_v6.safetensors"],"color":"#346434"},{"id":9,"type":"LoadImage","pos":{"0":-399,"1":802},"size":{"0":315,"1":314.0000305175781},"flags":{},"order":4,"mode":0,"inputs":[],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[250],"slot_index":0,"shape":3,"label":"IMAGE"},{"name":"MASK","type":"MASK","links":[],"shape":3,"label":"MASK"}],"properties":{"Node name for S&R":"LoadImage","ttNbgOverride":{"color":"#1f1f48","groupcolor":"#88A"}},"widgets_values":["example.png","image"],"color":"#1f1f48"},{"id":57,"type":"Reroute 
(rgthree)","pos":{"0":1166,"1":132},"size":[40,30],"flags":{},"order":10,"mode":0,"inputs":[{"name":"","type":"","link":283,"label":" ","dir":3,"has_old_label":true,"old_label":""}],"outputs":[{"name":"MODEL","type":"MODEL","links":[277],"slot_index":0,"label":" ","dir":4,"has_old_label":true,"old_label":""}],"properties":{"resizable":false,"size":[40,30],"connections_layout":["Left","Right"],"ttNbgOverride":{"color":"#1f1f48","groupcolor":"#88A"}},"color":"#1f1f48"},{"id":44,"type":"ICLightConditioning","pos":{"0":653,"1":281},"size":{"0":347.8768615722656,"1":138},"flags":{},"order":16,"mode":0,"inputs":[{"name":"positive","type":"CONDITIONING","link":157,"label":"positive"},{"name":"negative","type":"CONDITIONING","link":158,"label":"negative"},{"name":"vae","type":"VAE","link":null,"slot_index":2,"label":"vae"},{"name":"foreground","type":"LATENT","link":160,"slot_index":3,"label":"foreground"},{"name":"opt_background","type":"LATENT","link":null,"slot_index":4,"label":"opt_background"}],"outputs":[{"name":"positive","type":"CONDITIONING","links":[281],"slot_index":0,"shape":3,"label":"positive"},{"name":"negative","type":"CONDITIONING","links":[162],"slot_index":1,"shape":3,"label":"negative"},{"name":"empty_latent","type":"LATENT","links":[],"slot_index":2,"shape":3,"label":"empty_latent"}],"properties":{"Node name for S&R":"ICLightConditioning","ttNbgOverride":{"color":"#1f1f48","groupcolor":"#88A"}},"widgets_values":[0.25],"color":"#1f1f48"},{"id":37,"type":"LoadAndApplyICLightUnet","pos":{"0":634.3388671875,"1":135.0350799560547},"size":{"0":375.0868225097656,"1":58},"flags":{},"order":5,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":271,"label":"model"}],"outputs":[{"name":"MODEL","type":"MODEL","links":[283],"slot_index":0,"shape":3,"label":"MODEL"}],"properties":{"Node name for S&R":"LoadAndApplyICLightUnet"},"widgets_values":["IC-Light\iclight_sd15_fc.safetensors"],"color":"#346434"}],"links":[[4,2,1,4,0,"CLIP"],[5,2,1,5,0,"CLIP"],[77,19,0,7,0,"LATENT"],[152,22,0,42,0,"MASK"],[154,42,0,43,0,"MASK"],[155,43,0,35,0,"MASK"],[157,4,0,44,0,"CONDITIONING"],[158,5,0,44,1,"CONDITIONING"],[160,45,0,44,3,"LATENT"],[162,44,1,19,2,"CONDITIONING"],[169,35,0,41,0,"IMAGE"],[171,2,2,50,0,"VAE"],[173,51,0,22,0,"INT"],[174,51,1,22,1,"INT"],[187,7,0,60,1,"IMAGE"],[191,63,0,60,0,"IMAGE"],[192,62,0,63,0,""],[250,9,0,99,0,"IMAGE"],[252,99,0,51,0,"IMAGE"],[253,51,0,100,0,"INT,FLOAT,IMAGE,LATENT"],[254,100,0,22,2,"INT"],[255,51,1,101,0,"INT,FLOAT,IMAGE,LATENT"],[256,101,0,22,3,"INT"],[257,99,0,45,0,"IMAGE"],[259,7,0,103,0,"IMAGE"],[268,35,0,111,0,"IMAGE"],[269,111,0,19,3,"LATENT"],[270,99,0,62,0,""],[271,2,0,37,0,"MODEL"],[277,57,0,19,0,"MODEL"],[281,44,0,19,1,"CONDITIONING"],[283,37,0,57,0,"*"]],"groups":[{"title":"Group ♾️Mixlab","bounding":[0,0,0,0],"color":"#3f789e","font_size":24,"flags":{}},{"title":"IC Light 新组件","bounding":[580,41,469,447],"color":"#3f789e","font_size":24,"flags":{}},{"title":"这么大一堆,只为了创建一个小圆球","bounding":[90,750,1013,692],"color":"#a1309b","font_size":24,"flags":{}}],"config":{},"extra":{"ds":{"scale":1.4122927695244953,"offset":[108.65545616594284,115.63580401806543]},"workspace_info":{"id":"104db54e-ab3d-4f68-9077-f5f4e8a41c6b"}},"version":0.4}



## Additional Context
(Please add any additional context or steps to reproduce the error here)
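
As context on the `WEIGHT NOT MERGED` warning: IC-Light ships an input convolution with 8 input channels, while the base SD1.5 UNet has 4, so a working patch first zero-pads the base `conv_in` weight to 8 channels and only then adds the IC-Light offset. The sketch below follows the reference IC-Light release and is only an assumption about what that merge looks like, not this repository's implementation:

```python
# Sketch of the conv_in extension IC-Light needs (assumption based on the
# reference IC-Light code, not this repository's implementation).
import torch

def extend_conv_in_weight(base_weight: torch.Tensor, ic_light_offset: torch.Tensor) -> torch.Tensor:
    """Zero-pad a [320, 4, 3, 3] SD1.5 conv_in weight to 8 input channels,
    then add the IC-Light offset so the merged conv accepts 8-channel input."""
    out_ch, in_ch, kh, kw = base_weight.shape          # [320, 4, 3, 3]
    extra = ic_light_offset.shape[1] - in_ch           # 8 - 4 = 4 new channels
    padding = torch.zeros(out_ch, extra, kh, kw, dtype=base_weight.dtype)
    padded = torch.cat([base_weight, padding], dim=1)  # [320, 8, 3, 3]
    return padded + ic_light_offset                    # shapes now match

# Toy tensors matching the shapes in the warning above:
base = torch.randn(320, 4, 3, 3)
offset = torch.randn(320, 8, 3, 3)
print(extend_conv_in_weight(base, offset).shape)  # torch.Size([320, 8, 3, 3])
```

Without that padding step ComfyUI skips the mismatched weight, which is exactly the `WEIGHT NOT MERGED` line in the log, and the unpatched 4-channel conv then fails on the 8-channel input.
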
Lilien86 commented 1 month ago

Hey, I have the same issue.

huchenlei commented 1 week ago

Wrong repo. Please file this at https://github.com/kijai/ComfyUI-IC-Light instead.