Open · HaydenReeve opened this issue 7 months ago
@HaydenReeve Hi!
Unfortunately, I can't reproduce this error with my example workflow. Could you share the exact workflow you're using?
Sizes of tensors must match except in dimension 0. Expected size 154 but got size 77 for tensor number 1 in the list.
Hello, I also encountered this issue, this is my workflow.
Sizes of tensors must match except in dimension 0. Expected size 154 but got size 77 for tensor number 1 in the list.
This example shows the issue more clearly: the upper workflow (with a longer prompt) fails, while the lower workflow (with a shorter prompt) succeeds.
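For what it's worth, the 77 vs. 154 sizes line up with CLIP conditioning lengths: a short prompt encodes to a single 77-token chunk, while a longer prompt spills into two chunks (154 tokens). The failing line in `attention_couple.py` concatenates the per-region conditionings with `torch.cat(..., dim=0)`, which only works when every tensor has the same token count. A minimal sketch of the mismatch (the shapes and the padding workaround are assumptions for illustration, not the node's actual code):

```python
import torch

# Hypothetical conditioning tensors: a short prompt encodes to one 77-token
# CLIP chunk, a longer prompt to two chunks (154 tokens).
cond_short = torch.randn(1, 77, 2048)   # [batch, tokens, embedding_dim]
cond_long = torch.randn(1, 154, 2048)

# Concatenating along dim=0 requires every other dimension (here the token
# count) to match, so mixing prompt lengths raises exactly the reported error.
try:
    torch.cat([cond_short, cond_long], dim=0)
except RuntimeError as e:
    print(e)  # Sizes of tensors must match except in dimension 0. Expected size 77 but got size 154 ...

# One possible workaround (an assumption, not the node's fix): pad each
# conditioning to the longest token count by repeating its last token.
max_tokens = max(cond_short.shape[1], cond_long.shape[1])
padded = [
    torch.cat([c, c[:, -1:].repeat(1, max_tokens - c.shape[1], 1)], dim=1)
    for c in (cond_short, cond_long)
]
print(torch.cat(padded, dim=0).shape)  # torch.Size([2, 154, 2048])
```

Padding with the last token embedding is only one way to equalize lengths; the point is that the regional conditionings have to agree in dimension 1 before they can be concatenated or batched.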
Same issue here, I'm afraid.
Hi!
You could try checking out this branch and then see whether it fixes the issue:
cd path-to-comfyui-parent/ComfyUI/custom_nodes/ComfyUI-ComfyCouple
git fetch
git checkout fix/prompts-size-difference-failure
Hey there 👋
I just pulled down the latest changes to try this again, and I've managed to trigger the same issue.
# ComfyUI Error Report ## Error Details - **Node Type:** SamplerCustom - **Exception Type:** RuntimeError - **Exception Message:** The size of tensor a (7979) must match the size of tensor b (475) at non-singleton dimension 1 ## Stack Trace ``` File "E:\AI\ComfyUI\execution.py", line 323, in execute output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\execution.py", line 198, in get_output_data return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\execution.py", line 169, in _map_node_over_list process_inputs(input_dict, i) File "E:\AI\ComfyUI\execution.py", line 158, in process_inputs results.append(getattr(obj, func)(**inputs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy_extras\nodes_custom_sampler.py", line 476, in sample samples = comfy.sample.sample_custom(model, noise, cfg, sampler, sigmas, positive, negative, latent_image, noise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=noise_seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\custom_nodes\comfyui-diffusion-cg\recenter.py", line 45, in sample_center return SAMPLE(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\sample.py", line 48, in sample_custom samples = comfy.samplers.sample(model, noise, positive, negative, cfg, model.load_device, sampler, sigmas, model_options=model.model_options, latent_image=latent_image, denoise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 729, in sample return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 716, in sample output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 695, in inner_sample samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 600, in sample samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
"E:\AI\ComfyUI\.venv\Lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\k_diffusion\sampling.py", line 664, in sample_dpmpp_2m denoised = model(x, sigmas[i] * s_in, **extra_args) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 299, in __call__ out = self.inner_model(x, sigma, model_options=model_options, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 682, in __call__ return self.predict_noise(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 685, in predict_noise return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\custom_nodes\ComfyUI-AutomaticCFG\nodes.py", line 52, in sampling_function_patched out = comfy.samplers.calc_cond_batch(model, conds, x, timestep, model_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 228, in calc_cond_batch output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\model_base.py", line 142, in apply_model model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 857, in forward h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 44, in forward_timestep_embed x = layer(x, context, transformer_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\attention.py", line 694, in forward x = block(x, context=context[i], transformer_options=transformer_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, **kwargs) 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\attention.py", line 618, in forward n = attn2_replace_patch[block_attn2](n, context_attn2, value_attn2, extra_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\custom_nodes\ComfyUI-ComfyCouple\attention_couple.py", line 149, in patch qkv = qkv * masks ~~~~^~~~~~~ ``` ## System Information - **ComfyUI Version:** v0.2.3-3-g6632365 - **Arguments:** E:\AI\ComfyUI\main.py --front-end-version Comfy-Org/ComfyUI_frontend@latest - **OS:** nt - **Python Version:** 3.11.6 (tags/v3.11.6:8b6ee5b, Oct 2 2023, 14:57:12) [MSC v.1935 64 bit (AMD64)] - **Embedded Python:** false - **PyTorch Version:** 2.4.0+cu121 ## Devices - **Name:** cuda:0 NVIDIA GeForce RTX 4090 : cudaMallocAsync - **Type:** cuda - **VRAM Total:** 25756696576 - **VRAM Free:** 14511371458 - **Torch VRAM Total:** 9529458688 - **Torch VRAM Free:** 36828354 ## Logs ``` 2024-10-12 20:10:21,592 - root - INFO - Total VRAM 24564 MB, total RAM 64729 MB 2024-10-12 20:10:21,592 - root - INFO - pytorch version: 2.4.0+cu121 2024-10-12 20:10:21,593 - root - INFO - Set vram state to: NORMAL_VRAM 2024-10-12 20:10:21,593 - root - INFO - Device: cuda:0 NVIDIA GeForce RTX 4090 : cudaMallocAsync 2024-10-12 20:10:22,064 - root - INFO - Using pytorch cross attention 2024-10-12 20:10:23,281 - root - INFO - [Prompt Server] web root: E:\AI\ComfyUI\web_custom_versions\Comfy-Org_ComfyUI_frontend\1.3.18 2024-10-12 20:10:24,273 - albumentations.check_version - INFO - A new version of Albumentations is available: 1.4.18 (you have 1.4.13). Upgrade using: pip install -U albumentations. To disable automatic update checks, set the environment variable NO_ALBUMENTATIONS_UPDATE to 1. 
2024-10-12 20:10:24,656 - root - INFO - Import times for custom nodes: 2024-10-12 20:10:24,656 - root - INFO - 0.0 seconds: E:\AI\ComfyUI\custom_nodes\websocket_image_save.py 2024-10-12 20:10:24,656 - root - INFO - 0.0 seconds: E:\AI\ComfyUI\custom_nodes\ComfyUI-Better-Numbers 2024-10-12 20:10:24,656 - root - INFO - 0.0 seconds: E:\AI\ComfyUI\custom_nodes\ComfyUI-Better-Strings 2024-10-12 20:10:24,656 - root - INFO - 0.0 seconds: E:\AI\ComfyUI\custom_nodes\Skimmed_CFG 2024-10-12 20:10:24,657 - root - INFO - 0.0 seconds: E:\AI\ComfyUI\custom_nodes\SDXL_sizing 2024-10-12 20:10:24,657 - root - INFO - 0.0 seconds: E:\AI\ComfyUI\custom_nodes\comfyui-diffusion-cg 2024-10-12 20:10:24,657 - root - INFO - 0.0 seconds: E:\AI\ComfyUI\custom_nodes\ComfyUI_ADV_CLIP_emb 2024-10-12 20:10:24,657 - root - INFO - 0.0 seconds: E:\AI\ComfyUI\custom_nodes\ComfyUI-AutomaticCFG 2024-10-12 20:10:24,657 - root - INFO - 0.0 seconds: E:\AI\ComfyUI\custom_nodes\ComfyUI-Custom-Scripts 2024-10-12 20:10:24,657 - root - INFO - 0.0 seconds: E:\AI\ComfyUI\custom_nodes\ComfyUI-GGUF 2024-10-12 20:10:24,657 - root - INFO - 0.0 seconds: E:\AI\ComfyUI\custom_nodes\ComfyUI-ComfyCouple 2024-10-12 20:10:24,657 - root - INFO - 0.0 seconds: E:\AI\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus 2024-10-12 20:10:24,657 - root - INFO - 0.0 seconds: E:\AI\ComfyUI\custom_nodes\comfyui_controlnet_aux 2024-10-12 20:10:24,657 - root - INFO - 0.0 seconds: E:\AI\ComfyUI\custom_nodes\comfyui-workspace-manager 2024-10-12 20:10:24,657 - root - INFO - 0.3 seconds: E:\AI\ComfyUI\custom_nodes\ComfyUI-Impact-Pack 2024-10-12 20:10:24,657 - root - INFO - 0.7 seconds: E:\AI\ComfyUI\custom_nodes\ComfyUI-Inspyrenet-Rembg 2024-10-12 20:10:24,657 - root - INFO - 2024-10-12 20:10:24,662 - root - INFO - Starting server 2024-10-12 20:10:24,662 - root - INFO - To see the GUI go to: http://127.0.0.1:8188 2024-10-12 20:17:16,216 - root - INFO - got prompt 2024-10-12 20:17:16,896 - root - ERROR - Failed to validate prompt for output 232: 2024-10-12 20:17:16,896 - root - ERROR - * BNK_AddCLIPSDXLParams 533: 2024-10-12 20:17:16,896 - root - ERROR - - Required input is missing: crop_h 2024-10-12 20:17:16,896 - root - ERROR - - Required input is missing: target_height 2024-10-12 20:17:16,896 - root - ERROR - - Required input is missing: width 2024-10-12 20:17:16,896 - root - ERROR - - Required input is missing: target_width 2024-10-12 20:17:16,896 - root - ERROR - - Required input is missing: height 2024-10-12 20:17:16,896 - root - ERROR - - Required input is missing: crop_w 2024-10-12 20:17:16,896 - root - ERROR - Output will be ignored 2024-10-12 20:17:16,896 - root - WARNING - invalid prompt: {'type': 'prompt_outputs_failed_validation', 'message': 'Prompt outputs failed validation', 'details': '', 'extra_info': {}} 2024-10-12 20:17:41,361 - root - INFO - got prompt 2024-10-12 20:17:42,090 - root - INFO - Using pytorch attention in VAE 2024-10-12 20:17:42,090 - root - INFO - Using pytorch attention in VAE 2024-10-12 20:17:42,362 - root - INFO - model weight dtype torch.float16, manual cast: None 2024-10-12 20:17:42,363 - root - INFO - model_type EPS 2024-10-12 20:17:44,607 - root - INFO - Using pytorch attention in VAE 2024-10-12 20:17:44,608 - root - INFO - Using pytorch attention in VAE 2024-10-12 20:17:44,871 - root - INFO - Requested to load SDXLClipModel 2024-10-12 20:17:44,871 - root - INFO - Loading 1 new model 2024-10-12 20:17:44,877 - root - INFO - loaded completely 0.0 1560.802734375 True 2024-10-12 20:17:46,873 - root - INFO - Requested to load SDXLClipModel 
2024-10-12 20:17:46,873 - root - INFO - Loading 1 new model 2024-10-12 20:17:48,106 - root - INFO - loaded completely 0.0 1560.802734375 True 2024-10-12 20:17:48,359 - root - INFO - Requested to load SDXL 2024-10-12 20:17:48,359 - root - INFO - Loading 1 new model 2024-10-12 20:17:50,879 - root - INFO - loaded completely 0.0 4897.0483474731445 True 2024-10-12 20:17:57,518 - root - INFO - Requested to load AutoencoderKL 2024-10-12 20:17:57,518 - root - INFO - Loading 1 new model 2024-10-12 20:17:57,538 - root - INFO - loaded completely 0.0 159.55708122253418 True 2024-10-12 20:17:57,871 - root - INFO - Prompt executed in 15.97 seconds 2024-10-12 20:18:15,116 - root - INFO - got prompt 2024-10-12 20:18:24,134 - root - INFO - Prompt executed in 8.49 seconds 2024-10-12 20:19:49,614 - root - INFO - got prompt 2024-10-12 20:19:50,429 - root - INFO - Requested to load SDXL 2024-10-12 20:19:50,429 - root - INFO - Loading 1 new model 2024-10-12 20:19:59,012 - root - INFO - Prompt executed in 8.84 seconds 2024-10-12 20:20:40,114 - root - INFO - got prompt 2024-10-12 20:20:50,670 - root - INFO - Requested to load ControlNet 2024-10-12 20:20:50,671 - root - INFO - Loading 1 new model 2024-10-12 20:20:50,890 - root - INFO - loaded completely 0.0 2386.120147705078 True 2024-10-12 20:20:50,992 - root - ERROR - !!! Exception during processing !!! The size of tensor a (7979) must match the size of tensor b (475) at non-singleton dimension 1 2024-10-12 20:20:50,997 - root - ERROR - Traceback (most recent call last): File "E:\AI\ComfyUI\execution.py", line 323, in execute output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\execution.py", line 198, in get_output_data return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\execution.py", line 169, in _map_node_over_list process_inputs(input_dict, i) File "E:\AI\ComfyUI\execution.py", line 158, in process_inputs results.append(getattr(obj, func)(**inputs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy_extras\nodes_custom_sampler.py", line 476, in sample samples = comfy.sample.sample_custom(model, noise, cfg, sampler, sigmas, positive, negative, latent_image, noise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=noise_seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\custom_nodes\comfyui-diffusion-cg\recenter.py", line 45, in sample_center return SAMPLE(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\sample.py", line 48, in sample_custom samples = comfy.samplers.sample(model, noise, positive, negative, cfg, model.load_device, sampler, sigmas, model_options=model.model_options, latent_image=latent_image, denoise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed) 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 729, in sample return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 716, in sample output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 695, in inner_sample samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 600, in sample samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\k_diffusion\sampling.py", line 664, in sample_dpmpp_2m denoised = model(x, sigmas[i] * s_in, **extra_args) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 299, in __call__ out = self.inner_model(x, sigma, model_options=model_options, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 682, in __call__ return self.predict_noise(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 685, in predict_noise return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\custom_nodes\ComfyUI-AutomaticCFG\nodes.py", line 52, in sampling_function_patched out = comfy.samplers.calc_cond_batch(model, conds, x, timestep, model_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 228, in calc_cond_batch output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\model_base.py", line 142, in apply_model model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl return forward_call(*args, 
**kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 857, in forward h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 44, in forward_timestep_embed x = layer(x, context, transformer_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\attention.py", line 694, in forward x = block(x, context=context[i], transformer_options=transformer_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\attention.py", line 618, in forward n = attn2_replace_patch[block_attn2](n, context_attn2, value_attn2, extra_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\custom_nodes\ComfyUI-ComfyCouple\attention_couple.py", line 149, in patch qkv = qkv * masks ~~~~^~~~~~~ RuntimeError: The size of tensor a (7979) must match the size of tensor b (475) at non-singleton dimension 1 2024-10-12 20:20:50,999 - root - INFO - Prompt executed in 10.33 seconds ``` ## Attached Workflow Please make sure that workflow does not contain any sensitive information such as API keys or passwords. ``` Workflow too large. Please manually upload the workflow from local file system. ``` ## Additional Context (Please add any additional context or steps to reproduce the error here)
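As a side note on the first report above: the `qkv = qkv * masks` failure is a plain broadcasting mismatch. The attention tensor has 7979 tokens along dimension 1 while the mask tensor only has 475, and elementwise multiplication needs the sizes in each dimension to match (or be 1). A minimal standalone repro (the batch size and channel width are made up; only the 7979/475 mismatch comes from the log):

```python
import torch

# Illustrative shapes only: batch size and channel width are assumptions,
# the 7979 vs. 475 mismatch along dimension 1 is taken from the error report.
qkv = torch.randn(2, 7979, 640)    # attention tokens at the current latent resolution
masks = torch.randn(2, 475, 1)     # regional masks sized for a different token count

try:
    qkv * masks
except RuntimeError as e:
    print(e)  # The size of tensor a (7979) must match the size of tensor b (475) at non-singleton dimension 1
```

Whatever produces the two sizes inside the node, the multiplication can only succeed once the masks are resized and flattened to the same number of tokens as the attention input.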
# ComfyUI Error Report ## Error Details - **Node Type:** SamplerCustom - **Exception Type:** RuntimeError - **Exception Message:** The size of tensor a (7906) must match the size of tensor b (464) at non-singleton dimension 1 ## Stack Trace ``` File "E:\AI\ComfyUI\execution.py", line 323, in execute output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\execution.py", line 198, in get_output_data return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\execution.py", line 169, in _map_node_over_list process_inputs(input_dict, i) File "E:\AI\ComfyUI\execution.py", line 158, in process_inputs results.append(getattr(obj, func)(**inputs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy_extras\nodes_custom_sampler.py", line 476, in sample samples = comfy.sample.sample_custom(model, noise, cfg, sampler, sigmas, positive, negative, latent_image, noise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=noise_seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\custom_nodes\comfyui-diffusion-cg\recenter.py", line 45, in sample_center return SAMPLE(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\sample.py", line 48, in sample_custom samples = comfy.samplers.sample(model, noise, positive, negative, cfg, model.load_device, sampler, sigmas, model_options=model.model_options, latent_image=latent_image, denoise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 729, in sample return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 716, in sample output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 695, in inner_sample samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 600, in sample samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
"E:\AI\ComfyUI\.venv\Lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\k_diffusion\sampling.py", line 664, in sample_dpmpp_2m denoised = model(x, sigmas[i] * s_in, **extra_args) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 299, in __call__ out = self.inner_model(x, sigma, model_options=model_options, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 682, in __call__ return self.predict_noise(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 685, in predict_noise return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\custom_nodes\ComfyUI-AutomaticCFG\nodes.py", line 52, in sampling_function_patched out = comfy.samplers.calc_cond_batch(model, conds, x, timestep, model_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 228, in calc_cond_batch output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\model_base.py", line 142, in apply_model model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 857, in forward h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 44, in forward_timestep_embed x = layer(x, context, transformer_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\attention.py", line 694, in forward x = block(x, context=context[i], transformer_options=transformer_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, **kwargs) 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\attention.py", line 618, in forward n = attn2_replace_patch[block_attn2](n, context_attn2, value_attn2, extra_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\custom_nodes\ComfyUI-ComfyCouple\attention_couple.py", line 149, in patch qkv = qkv * masks ~~~~^~~~~~~ ``` ## System Information - **ComfyUI Version:** v0.2.3-3-g6632365 - **Arguments:** E:\AI\ComfyUI\main.py --front-end-version Comfy-Org/ComfyUI_frontend@latest - **OS:** nt - **Python Version:** 3.11.6 (tags/v3.11.6:8b6ee5b, Oct 2 2023, 14:57:12) [MSC v.1935 64 bit (AMD64)] - **Embedded Python:** false - **PyTorch Version:** 2.4.0+cu121 ## Devices - **Name:** cuda:0 NVIDIA GeForce RTX 4090 : cudaMallocAsync - **Type:** cuda - **VRAM Total:** 25756696576 - **VRAM Free:** 14507263170 - **Torch VRAM Total:** 9563013120 - **Torch VRAM Free:** 70468802 ## Logs ``` 2024-10-12 20:10:21,592 - root - INFO - Total VRAM 24564 MB, total RAM 64729 MB 2024-10-12 20:10:21,592 - root - INFO - pytorch version: 2.4.0+cu121 2024-10-12 20:10:21,593 - root - INFO - Set vram state to: NORMAL_VRAM 2024-10-12 20:10:21,593 - root - INFO - Device: cuda:0 NVIDIA GeForce RTX 4090 : cudaMallocAsync 2024-10-12 20:10:22,064 - root - INFO - Using pytorch cross attention 2024-10-12 20:10:23,281 - root - INFO - [Prompt Server] web root: E:\AI\ComfyUI\web_custom_versions\Comfy-Org_ComfyUI_frontend\1.3.18 2024-10-12 20:10:24,273 - albumentations.check_version - INFO - A new version of Albumentations is available: 1.4.18 (you have 1.4.13). Upgrade using: pip install -U albumentations. To disable automatic update checks, set the environment variable NO_ALBUMENTATIONS_UPDATE to 1. 
2024-10-12 20:10:24,656 - root - INFO - Import times for custom nodes: 2024-10-12 20:10:24,656 - root - INFO - 0.0 seconds: E:\AI\ComfyUI\custom_nodes\websocket_image_save.py 2024-10-12 20:10:24,656 - root - INFO - 0.0 seconds: E:\AI\ComfyUI\custom_nodes\ComfyUI-Better-Numbers 2024-10-12 20:10:24,656 - root - INFO - 0.0 seconds: E:\AI\ComfyUI\custom_nodes\ComfyUI-Better-Strings 2024-10-12 20:10:24,656 - root - INFO - 0.0 seconds: E:\AI\ComfyUI\custom_nodes\Skimmed_CFG 2024-10-12 20:10:24,657 - root - INFO - 0.0 seconds: E:\AI\ComfyUI\custom_nodes\SDXL_sizing 2024-10-12 20:10:24,657 - root - INFO - 0.0 seconds: E:\AI\ComfyUI\custom_nodes\comfyui-diffusion-cg 2024-10-12 20:10:24,657 - root - INFO - 0.0 seconds: E:\AI\ComfyUI\custom_nodes\ComfyUI_ADV_CLIP_emb 2024-10-12 20:10:24,657 - root - INFO - 0.0 seconds: E:\AI\ComfyUI\custom_nodes\ComfyUI-AutomaticCFG 2024-10-12 20:10:24,657 - root - INFO - 0.0 seconds: E:\AI\ComfyUI\custom_nodes\ComfyUI-Custom-Scripts 2024-10-12 20:10:24,657 - root - INFO - 0.0 seconds: E:\AI\ComfyUI\custom_nodes\ComfyUI-GGUF 2024-10-12 20:10:24,657 - root - INFO - 0.0 seconds: E:\AI\ComfyUI\custom_nodes\ComfyUI-ComfyCouple 2024-10-12 20:10:24,657 - root - INFO - 0.0 seconds: E:\AI\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus 2024-10-12 20:10:24,657 - root - INFO - 0.0 seconds: E:\AI\ComfyUI\custom_nodes\comfyui_controlnet_aux 2024-10-12 20:10:24,657 - root - INFO - 0.0 seconds: E:\AI\ComfyUI\custom_nodes\comfyui-workspace-manager 2024-10-12 20:10:24,657 - root - INFO - 0.3 seconds: E:\AI\ComfyUI\custom_nodes\ComfyUI-Impact-Pack 2024-10-12 20:10:24,657 - root - INFO - 0.7 seconds: E:\AI\ComfyUI\custom_nodes\ComfyUI-Inspyrenet-Rembg 2024-10-12 20:10:24,657 - root - INFO - 2024-10-12 20:10:24,662 - root - INFO - Starting server 2024-10-12 20:10:24,662 - root - INFO - To see the GUI go to: http://127.0.0.1:8188 2024-10-12 20:17:16,216 - root - INFO - got prompt 2024-10-12 20:17:16,896 - root - ERROR - Failed to validate prompt for output 232: 2024-10-12 20:17:16,896 - root - ERROR - * BNK_AddCLIPSDXLParams 533: 2024-10-12 20:17:16,896 - root - ERROR - - Required input is missing: crop_h 2024-10-12 20:17:16,896 - root - ERROR - - Required input is missing: target_height 2024-10-12 20:17:16,896 - root - ERROR - - Required input is missing: width 2024-10-12 20:17:16,896 - root - ERROR - - Required input is missing: target_width 2024-10-12 20:17:16,896 - root - ERROR - - Required input is missing: height 2024-10-12 20:17:16,896 - root - ERROR - - Required input is missing: crop_w 2024-10-12 20:17:16,896 - root - ERROR - Output will be ignored 2024-10-12 20:17:16,896 - root - WARNING - invalid prompt: {'type': 'prompt_outputs_failed_validation', 'message': 'Prompt outputs failed validation', 'details': '', 'extra_info': {}} 2024-10-12 20:17:41,361 - root - INFO - got prompt 2024-10-12 20:17:42,090 - root - INFO - Using pytorch attention in VAE 2024-10-12 20:17:42,090 - root - INFO - Using pytorch attention in VAE 2024-10-12 20:17:42,362 - root - INFO - model weight dtype torch.float16, manual cast: None 2024-10-12 20:17:42,363 - root - INFO - model_type EPS 2024-10-12 20:17:44,607 - root - INFO - Using pytorch attention in VAE 2024-10-12 20:17:44,608 - root - INFO - Using pytorch attention in VAE 2024-10-12 20:17:44,871 - root - INFO - Requested to load SDXLClipModel 2024-10-12 20:17:44,871 - root - INFO - Loading 1 new model 2024-10-12 20:17:44,877 - root - INFO - loaded completely 0.0 1560.802734375 True 2024-10-12 20:17:46,873 - root - INFO - Requested to load SDXLClipModel 
2024-10-12 20:17:46,873 - root - INFO - Loading 1 new model 2024-10-12 20:17:48,106 - root - INFO - loaded completely 0.0 1560.802734375 True 2024-10-12 20:17:48,359 - root - INFO - Requested to load SDXL 2024-10-12 20:17:48,359 - root - INFO - Loading 1 new model 2024-10-12 20:17:50,879 - root - INFO - loaded completely 0.0 4897.0483474731445 True 2024-10-12 20:17:57,518 - root - INFO - Requested to load AutoencoderKL 2024-10-12 20:17:57,518 - root - INFO - Loading 1 new model 2024-10-12 20:17:57,538 - root - INFO - loaded completely 0.0 159.55708122253418 True 2024-10-12 20:17:57,871 - root - INFO - Prompt executed in 15.97 seconds 2024-10-12 20:18:15,116 - root - INFO - got prompt 2024-10-12 20:18:24,134 - root - INFO - Prompt executed in 8.49 seconds 2024-10-12 20:19:49,614 - root - INFO - got prompt 2024-10-12 20:19:50,429 - root - INFO - Requested to load SDXL 2024-10-12 20:19:50,429 - root - INFO - Loading 1 new model 2024-10-12 20:19:59,012 - root - INFO - Prompt executed in 8.84 seconds 2024-10-12 20:20:40,114 - root - INFO - got prompt 2024-10-12 20:20:50,670 - root - INFO - Requested to load ControlNet 2024-10-12 20:20:50,671 - root - INFO - Loading 1 new model 2024-10-12 20:20:50,890 - root - INFO - loaded completely 0.0 2386.120147705078 True 2024-10-12 20:20:50,992 - root - ERROR - !!! Exception during processing !!! The size of tensor a (7979) must match the size of tensor b (475) at non-singleton dimension 1 2024-10-12 20:20:50,997 - root - ERROR - Traceback (most recent call last): File "E:\AI\ComfyUI\execution.py", line 323, in execute output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\execution.py", line 198, in get_output_data return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\execution.py", line 169, in _map_node_over_list process_inputs(input_dict, i) File "E:\AI\ComfyUI\execution.py", line 158, in process_inputs results.append(getattr(obj, func)(**inputs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy_extras\nodes_custom_sampler.py", line 476, in sample samples = comfy.sample.sample_custom(model, noise, cfg, sampler, sigmas, positive, negative, latent_image, noise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=noise_seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\custom_nodes\comfyui-diffusion-cg\recenter.py", line 45, in sample_center return SAMPLE(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\sample.py", line 48, in sample_custom samples = comfy.samplers.sample(model, noise, positive, negative, cfg, model.load_device, sampler, sigmas, model_options=model.model_options, latent_image=latent_image, denoise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed) 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 729, in sample return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 716, in sample output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 695, in inner_sample samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 600, in sample samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\k_diffusion\sampling.py", line 664, in sample_dpmpp_2m denoised = model(x, sigmas[i] * s_in, **extra_args) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 299, in __call__ out = self.inner_model(x, sigma, model_options=model_options, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 682, in __call__ return self.predict_noise(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 685, in predict_noise return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\custom_nodes\ComfyUI-AutomaticCFG\nodes.py", line 52, in sampling_function_patched out = comfy.samplers.calc_cond_batch(model, conds, x, timestep, model_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 228, in calc_cond_batch output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\model_base.py", line 142, in apply_model model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl return forward_call(*args, 
**kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 857, in forward h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 44, in forward_timestep_embed x = layer(x, context, transformer_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\attention.py", line 694, in forward x = block(x, context=context[i], transformer_options=transformer_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\attention.py", line 618, in forward n = attn2_replace_patch[block_attn2](n, context_attn2, value_attn2, extra_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\custom_nodes\ComfyUI-ComfyCouple\attention_couple.py", line 149, in patch qkv = qkv * masks ~~~~^~~~~~~ RuntimeError: The size of tensor a (7979) must match the size of tensor b (475) at non-singleton dimension 1 2024-10-12 20:20:50,999 - root - INFO - Prompt executed in 10.33 seconds 2024-10-12 20:29:46,876 - root - INFO - got prompt 2024-10-12 20:29:47,713 - root - INFO - Requested to load SDXL 2024-10-12 20:29:47,713 - root - INFO - Loading 1 new model 2024-10-12 20:29:47,733 - root - ERROR - !!! Exception during processing !!! Sizes of tensors must match except in dimension 0. Expected size 77 but got size 154 for tensor number 1 in the list. 
2024-10-12 20:29:47,734 - root - ERROR - Traceback (most recent call last): File "E:\AI\ComfyUI\execution.py", line 323, in execute output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\execution.py", line 198, in get_output_data return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\execution.py", line 169, in _map_node_over_list process_inputs(input_dict, i) File "E:\AI\ComfyUI\execution.py", line 158, in process_inputs results.append(getattr(obj, func)(**inputs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy_extras\nodes_custom_sampler.py", line 476, in sample samples = comfy.sample.sample_custom(model, noise, cfg, sampler, sigmas, positive, negative, latent_image, noise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=noise_seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\custom_nodes\comfyui-diffusion-cg\recenter.py", line 45, in sample_center return SAMPLE(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\sample.py", line 48, in sample_custom samples = comfy.samplers.sample(model, noise, positive, negative, cfg, model.load_device, sampler, sigmas, model_options=model.model_options, latent_image=latent_image, denoise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 729, in sample return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 716, in sample output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 695, in inner_sample samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 600, in sample samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\k_diffusion\sampling.py", line 664, in sample_dpmpp_2m 
denoised = model(x, sigmas[i] * s_in, **extra_args) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 299, in __call__ out = self.inner_model(x, sigma, model_options=model_options, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 682, in __call__ return self.predict_noise(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 685, in predict_noise return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\custom_nodes\ComfyUI-AutomaticCFG\nodes.py", line 52, in sampling_function_patched out = comfy.samplers.calc_cond_batch(model, conds, x, timestep, model_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 228, in calc_cond_batch output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\model_base.py", line 142, in apply_model model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 857, in forward h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 44, in forward_timestep_embed x = layer(x, context, transformer_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\attention.py", line 694, in forward x = block(x, context=context[i], transformer_options=transformer_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\attention.py", line 
618, in forward n = attn2_replace_patch[block_attn2](n, context_attn2, value_attn2, extra_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\custom_nodes\ComfyUI-ComfyCouple\attention_couple.py", line 120, in patch context_cond = torch.cat([cond for cond in self.negative_positive_conds[1]], dim=0) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ RuntimeError: Sizes of tensors must match except in dimension 0. Expected size 77 but got size 154 for tensor number 1 in the list. 2024-10-12 20:29:47,736 - root - INFO - Prompt executed in 0.30 seconds 2024-10-12 20:30:19,310 - root - INFO - got prompt 2024-10-12 20:30:19,883 - root - ERROR - !!! Exception during processing !!! Sizes of tensors must match except in dimension 0. Expected size 77 but got size 154 for tensor number 1 in the list. 2024-10-12 20:30:19,884 - root - ERROR - Traceback (most recent call last): File "E:\AI\ComfyUI\execution.py", line 323, in execute output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\execution.py", line 198, in get_output_data return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\execution.py", line 169, in _map_node_over_list process_inputs(input_dict, i) File "E:\AI\ComfyUI\execution.py", line 158, in process_inputs results.append(getattr(obj, func)(**inputs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy_extras\nodes_custom_sampler.py", line 476, in sample samples = comfy.sample.sample_custom(model, noise, cfg, sampler, sigmas, positive, negative, latent_image, noise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=noise_seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\custom_nodes\comfyui-diffusion-cg\recenter.py", line 45, in sample_center return SAMPLE(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\sample.py", line 48, in sample_custom samples = comfy.samplers.sample(model, noise, positive, negative, cfg, model.load_device, sampler, sigmas, model_options=model.model_options, latent_image=latent_image, denoise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 729, in sample return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 716, in sample output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed) 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 695, in inner_sample samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 600, in sample samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\k_diffusion\sampling.py", line 664, in sample_dpmpp_2m denoised = model(x, sigmas[i] * s_in, **extra_args) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 299, in __call__ out = self.inner_model(x, sigma, model_options=model_options, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 682, in __call__ return self.predict_noise(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 685, in predict_noise return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\custom_nodes\ComfyUI-AutomaticCFG\nodes.py", line 52, in sampling_function_patched out = comfy.samplers.calc_cond_batch(model, conds, x, timestep, model_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 228, in calc_cond_batch output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\model_base.py", line 142, in apply_model model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 857, in forward h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 44, in forward_timestep_embed x = layer(x, context, transformer_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
"E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\attention.py", line 694, in forward x = block(x, context=context[i], transformer_options=transformer_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\attention.py", line 618, in forward n = attn2_replace_patch[block_attn2](n, context_attn2, value_attn2, extra_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\custom_nodes\ComfyUI-ComfyCouple\attention_couple.py", line 120, in patch context_cond = torch.cat([cond for cond in self.negative_positive_conds[1]], dim=0) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ RuntimeError: Sizes of tensors must match except in dimension 0. Expected size 77 but got size 154 for tensor number 1 in the list. 2024-10-12 20:30:19,887 - root - INFO - Prompt executed in 0.03 seconds 2024-10-12 20:30:46,585 - root - INFO - got prompt 2024-10-12 20:30:47,263 - root - INFO - Requested to load SDXL 2024-10-12 20:30:47,263 - root - INFO - Loading 1 new model 2024-10-12 20:30:55,030 - root - INFO - Prompt executed in 7.91 seconds 2024-10-12 20:31:05,169 - root - INFO - got prompt 2024-10-12 20:31:14,744 - root - ERROR - !!! Exception during processing !!! 
The size of tensor a (7979) must match the size of tensor b (475) at non-singleton dimension 1 2024-10-12 20:31:14,746 - root - ERROR - Traceback (most recent call last): File "E:\AI\ComfyUI\execution.py", line 323, in execute output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\execution.py", line 198, in get_output_data return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\execution.py", line 169, in _map_node_over_list process_inputs(input_dict, i) File "E:\AI\ComfyUI\execution.py", line 158, in process_inputs results.append(getattr(obj, func)(**inputs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy_extras\nodes_custom_sampler.py", line 476, in sample samples = comfy.sample.sample_custom(model, noise, cfg, sampler, sigmas, positive, negative, latent_image, noise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=noise_seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\custom_nodes\comfyui-diffusion-cg\recenter.py", line 45, in sample_center return SAMPLE(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\sample.py", line 48, in sample_custom samples = comfy.samplers.sample(model, noise, positive, negative, cfg, model.load_device, sampler, sigmas, model_options=model.model_options, latent_image=latent_image, denoise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 729, in sample return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 716, in sample output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 695, in inner_sample samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 600, in sample samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context return func(*args, **kwargs) 
^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\k_diffusion\sampling.py", line 664, in sample_dpmpp_2m denoised = model(x, sigmas[i] * s_in, **extra_args) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 299, in __call__ out = self.inner_model(x, sigma, model_options=model_options, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 682, in __call__ return self.predict_noise(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 685, in predict_noise return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\custom_nodes\ComfyUI-AutomaticCFG\nodes.py", line 52, in sampling_function_patched out = comfy.samplers.calc_cond_batch(model, conds, x, timestep, model_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 228, in calc_cond_batch output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\model_base.py", line 142, in apply_model model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 857, in forward h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 44, in forward_timestep_embed x = layer(x, context, transformer_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\attention.py", line 694, in forward x = block(x, context=context[i], transformer_options=transformer_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl return 
forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\attention.py", line 618, in forward n = attn2_replace_patch[block_attn2](n, context_attn2, value_attn2, extra_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\custom_nodes\ComfyUI-ComfyCouple\attention_couple.py", line 149, in patch qkv = qkv * masks ~~~~^~~~~~~ RuntimeError: The size of tensor a (7979) must match the size of tensor b (475) at non-singleton dimension 1 2024-10-12 20:31:14,748 - root - INFO - Prompt executed in 9.01 seconds 2024-10-12 20:31:23,056 - root - INFO - got prompt 2024-10-12 20:31:29,357 - root - INFO - got prompt 2024-10-12 20:31:33,828 - root - ERROR - !!! Exception during processing !!! The size of tensor a (7979) must match the size of tensor b (475) at non-singleton dimension 1 2024-10-12 20:31:33,829 - root - ERROR - Traceback (most recent call last): File "E:\AI\ComfyUI\execution.py", line 323, in execute output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\execution.py", line 198, in get_output_data return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\execution.py", line 169, in _map_node_over_list process_inputs(input_dict, i) File "E:\AI\ComfyUI\execution.py", line 158, in process_inputs results.append(getattr(obj, func)(**inputs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy_extras\nodes_custom_sampler.py", line 476, in sample samples = comfy.sample.sample_custom(model, noise, cfg, sampler, sigmas, positive, negative, latent_image, noise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=noise_seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\custom_nodes\comfyui-diffusion-cg\recenter.py", line 45, in sample_center return SAMPLE(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\sample.py", line 48, in sample_custom samples = comfy.samplers.sample(model, noise, positive, negative, cfg, model.load_device, sampler, sigmas, model_options=model.model_options, latent_image=latent_image, denoise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 729, in sample return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 716, in sample output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed) 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 695, in inner_sample samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 600, in sample samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\k_diffusion\sampling.py", line 664, in sample_dpmpp_2m denoised = model(x, sigmas[i] * s_in, **extra_args) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 299, in __call__ out = self.inner_model(x, sigma, model_options=model_options, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 682, in __call__ return self.predict_noise(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 685, in predict_noise return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\custom_nodes\ComfyUI-AutomaticCFG\nodes.py", line 52, in sampling_function_patched out = comfy.samplers.calc_cond_batch(model, conds, x, timestep, model_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 228, in calc_cond_batch output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\model_base.py", line 142, in apply_model model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 857, in forward h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 44, in forward_timestep_embed x = layer(x, context, transformer_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
"E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\attention.py", line 694, in forward x = block(x, context=context[i], transformer_options=transformer_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\attention.py", line 618, in forward n = attn2_replace_patch[block_attn2](n, context_attn2, value_attn2, extra_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\custom_nodes\ComfyUI-ComfyCouple\attention_couple.py", line 149, in patch qkv = qkv * masks ~~~~^~~~~~~ RuntimeError: The size of tensor a (7979) must match the size of tensor b (475) at non-singleton dimension 1 2024-10-12 20:31:33,831 - root - INFO - Prompt executed in 10.23 seconds 2024-10-12 20:31:42,391 - root - INFO - Prompt executed in 8.36 seconds 2024-10-12 20:31:54,041 - root - INFO - got prompt 2024-10-12 20:32:03,215 - root - ERROR - !!! Exception during processing !!! The size of tensor a (7979) must match the size of tensor b (475) at non-singleton dimension 1 2024-10-12 20:32:03,216 - root - ERROR - Traceback (most recent call last): File "E:\AI\ComfyUI\execution.py", line 323, in execute output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\execution.py", line 198, in get_output_data return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\execution.py", line 169, in _map_node_over_list process_inputs(input_dict, i) File "E:\AI\ComfyUI\execution.py", line 158, in process_inputs results.append(getattr(obj, func)(**inputs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy_extras\nodes_custom_sampler.py", line 476, in sample samples = comfy.sample.sample_custom(model, noise, cfg, sampler, sigmas, positive, negative, latent_image, noise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=noise_seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\custom_nodes\comfyui-diffusion-cg\recenter.py", line 45, in sample_center return SAMPLE(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\sample.py", line 48, in sample_custom samples = comfy.samplers.sample(model, noise, positive, negative, cfg, model.load_device, sampler, sigmas, model_options=model.model_options, 
latent_image=latent_image, denoise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 729, in sample return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 716, in sample output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 695, in inner_sample samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 600, in sample samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\k_diffusion\sampling.py", line 664, in sample_dpmpp_2m denoised = model(x, sigmas[i] * s_in, **extra_args) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 299, in __call__ out = self.inner_model(x, sigma, model_options=model_options, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 682, in __call__ return self.predict_noise(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 685, in predict_noise return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\custom_nodes\ComfyUI-AutomaticCFG\nodes.py", line 52, in sampling_function_patched out = comfy.samplers.calc_cond_batch(model, conds, x, timestep, model_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 228, in calc_cond_batch output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\model_base.py", line 142, in apply_model model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
"E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 857, in forward h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 44, in forward_timestep_embed x = layer(x, context, transformer_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\attention.py", line 694, in forward x = block(x, context=context[i], transformer_options=transformer_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\attention.py", line 618, in forward n = attn2_replace_patch[block_attn2](n, context_attn2, value_attn2, extra_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\custom_nodes\ComfyUI-ComfyCouple\attention_couple.py", line 149, in patch qkv = qkv * masks ~~~~^~~~~~~ RuntimeError: The size of tensor a (7979) must match the size of tensor b (475) at non-singleton dimension 1 2024-10-12 20:32:03,219 - root - INFO - Prompt executed in 8.58 seconds 2024-10-12 20:37:21,929 - root - INFO - got prompt 2024-10-12 20:37:22,502 - root - INFO - Requested to load SDXL 2024-10-12 20:37:22,502 - root - INFO - Loading 1 new model 2024-10-12 20:37:30,184 - root - INFO - Prompt executed in 7.72 seconds 2024-10-12 20:37:42,601 - root - INFO - got prompt 2024-10-12 20:37:52,934 - root - ERROR - !!! Exception during processing !!! 
The size of tensor a (7906) must match the size of tensor b (464) at non-singleton dimension 1 2024-10-12 20:37:52,936 - root - ERROR - Traceback (most recent call last): File "E:\AI\ComfyUI\execution.py", line 323, in execute output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\execution.py", line 198, in get_output_data return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\execution.py", line 169, in _map_node_over_list process_inputs(input_dict, i) File "E:\AI\ComfyUI\execution.py", line 158, in process_inputs results.append(getattr(obj, func)(**inputs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy_extras\nodes_custom_sampler.py", line 476, in sample samples = comfy.sample.sample_custom(model, noise, cfg, sampler, sigmas, positive, negative, latent_image, noise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=noise_seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\custom_nodes\comfyui-diffusion-cg\recenter.py", line 45, in sample_center return SAMPLE(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\sample.py", line 48, in sample_custom samples = comfy.samplers.sample(model, noise, positive, negative, cfg, model.load_device, sampler, sigmas, model_options=model.model_options, latent_image=latent_image, denoise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 729, in sample return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 716, in sample output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 695, in inner_sample samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 600, in sample samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context return func(*args, **kwargs) 
^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\k_diffusion\sampling.py", line 664, in sample_dpmpp_2m denoised = model(x, sigmas[i] * s_in, **extra_args) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 299, in __call__ out = self.inner_model(x, sigma, model_options=model_options, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 682, in __call__ return self.predict_noise(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 685, in predict_noise return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\custom_nodes\ComfyUI-AutomaticCFG\nodes.py", line 52, in sampling_function_patched out = comfy.samplers.calc_cond_batch(model, conds, x, timestep, model_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 228, in calc_cond_batch output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\model_base.py", line 142, in apply_model model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 857, in forward h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 44, in forward_timestep_embed x = layer(x, context, transformer_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\attention.py", line 694, in forward x = block(x, context=context[i], transformer_options=transformer_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl return 
forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\attention.py", line 618, in forward n = attn2_replace_patch[block_attn2](n, context_attn2, value_attn2, extra_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\custom_nodes\ComfyUI-ComfyCouple\attention_couple.py", line 149, in patch qkv = qkv * masks ~~~~^~~~~~~ RuntimeError: The size of tensor a (7906) must match the size of tensor b (464) at non-singleton dimension 1 2024-10-12 20:37:52,938 - root - INFO - Prompt executed in 9.79 seconds ``` ## Attached Workflow Please make sure that workflow does not contain any sensitive information such as API keys or passwords. ``` Workflow too large. Please manually upload the workflow from local file system. ``` ## Additional Context (Please add any additional context or steps to reproduce the error here)
It seems quite difficult to get a bead on exactly what is causing it.
I'm noting that changing a workflow from 4:3 to 3:4 triggered the first log, while upscaling a working prompt and its output triggered the second log.
You can also trigger it instantly in any workflow by reducing one prompt to only a few characters while keeping the other prompt a few paragraphs long.
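To make that prompt-length failure concrete, here is a minimal standalone sketch (assumed tensor shapes, not the node's actual code) of what blows up at `attention_couple.py` line 120: SDXL CLIP conditioning comes back in multiples of 77 tokens, so a short prompt yields a 77-token tensor while a paragraph-long prompt yields 154 (or 231) tokens, and `torch.cat` refuses to stack them. The zero-padding at the end is only a hypothetical illustration of one way the lengths could be reconciled, not necessarily what the fix branch does.

```python
# Minimal reproduction with assumed shapes (batch, tokens, embed_dim).
# SDXL text conds use a 2048-wide embedding; token counts come in multiples of 77.
import torch
import torch.nn.functional as F

short_cond = torch.randn(1, 77, 2048)   # few-character prompt -> one 77-token chunk
long_cond = torch.randn(1, 154, 2048)   # multi-paragraph prompt -> two chunks

try:
    torch.cat([short_cond, long_cond], dim=0)  # stacking the couple's conds, as the patch does
except RuntimeError as e:
    print(e)  # Sizes of tensors must match except in dimension 0. Expected size 77 but got size 154 ...

# Hypothetical mitigation: pad every cond to the longest token length first.
max_len = max(c.shape[1] for c in (short_cond, long_cond))
padded = [F.pad(c, (0, 0, 0, max_len - c.shape[1])) for c in (short_cond, long_cond)]
print(torch.cat(padded, dim=0).shape)  # torch.Size([2, 154, 2048])
```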
I'll also note here that if you multiply the conditioning area (i.e. upscale the conditioning as well as the latent), you can somewhat work around the issue.
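For reference, the other failure mode (`qkv = qkv * masks` at line 149) looks like a spatial mismatch rather than a token-length one: the attention hidden states cover the latent actually being sampled, while the mask appears to be flattened from the area the conditioning was set up for. A rough sketch with assumed shapes (the 7979/475 figures are taken from the log above; the variable names and channel count are illustrative only):

```python
# Rough illustration with assumed shapes (batch, latent_positions, channels).
# If only the latent is upscaled, the hidden states and the region mask end up
# covering different numbers of positions and the elementwise multiply fails.
import torch

qkv = torch.randn(2, 7979, 640)   # hidden states for the upscaled latent
masks = torch.ones(2, 475, 1)     # region mask flattened from the original size

try:
    qkv * masks
except RuntimeError as e:
    print(e)  # The size of tensor a (7979) must match the size of tensor b (475) ...

# Scaling the conditioning area together with the latent keeps the two counts
# equal, which is presumably why the workaround above papers over this crash.
```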
Unfortunately it doesn't fix the main issue of drastically different prompt lengths causing a crash. We don't even get to the upscale in this scenario:
# ComfyUI Error Report ## Error Details - **Node Type:** SamplerCustom - **Exception Type:** RuntimeError - **Exception Message:** Sizes of tensors must match except in dimension 0. Expected size 77 but got size 231 for tensor number 1 in the list. ## Stack Trace ``` File "E:\AI\ComfyUI\execution.py", line 323, in execute output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\execution.py", line 198, in get_output_data return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\execution.py", line 169, in _map_node_over_list process_inputs(input_dict, i) File "E:\AI\ComfyUI\execution.py", line 158, in process_inputs results.append(getattr(obj, func)(**inputs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy_extras\nodes_custom_sampler.py", line 476, in sample samples = comfy.sample.sample_custom(model, noise, cfg, sampler, sigmas, positive, negative, latent_image, noise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=noise_seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\custom_nodes\comfyui-diffusion-cg\recenter.py", line 45, in sample_center return SAMPLE(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\sample.py", line 48, in sample_custom samples = comfy.samplers.sample(model, noise, positive, negative, cfg, model.load_device, sampler, sigmas, model_options=model.model_options, latent_image=latent_image, denoise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 729, in sample return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 716, in sample output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 695, in inner_sample samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 600, in sample samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
"E:\AI\ComfyUI\.venv\Lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\k_diffusion\sampling.py", line 664, in sample_dpmpp_2m denoised = model(x, sigmas[i] * s_in, **extra_args) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 299, in __call__ out = self.inner_model(x, sigma, model_options=model_options, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 682, in __call__ return self.predict_noise(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 685, in predict_noise return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\custom_nodes\ComfyUI-AutomaticCFG\nodes.py", line 52, in sampling_function_patched out = comfy.samplers.calc_cond_batch(model, conds, x, timestep, model_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 228, in calc_cond_batch output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\model_base.py", line 142, in apply_model model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 857, in forward h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 44, in forward_timestep_embed x = layer(x, context, transformer_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\attention.py", line 694, in forward x = block(x, context=context[i], transformer_options=transformer_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, **kwargs) 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\attention.py", line 618, in forward n = attn2_replace_patch[block_attn2](n, context_attn2, value_attn2, extra_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\custom_nodes\ComfyUI-ComfyCouple\attention_couple.py", line 120, in patch context_cond = torch.cat([cond for cond in self.negative_positive_conds[1]], dim=0) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ``` ## System Information - **ComfyUI Version:** v0.2.3-3-g6632365 - **Arguments:** E:\AI\ComfyUI\main.py --front-end-version Comfy-Org/ComfyUI_frontend@latest - **OS:** nt - **Python Version:** 3.11.6 (tags/v3.11.6:8b6ee5b, Oct 2 2023, 14:57:12) [MSC v.1935 64 bit (AMD64)] - **Embedded Python:** false - **PyTorch Version:** 2.4.0+cu121 ## Devices - **Name:** cuda:0 NVIDIA GeForce RTX 4090 : cudaMallocAsync - **Type:** cuda - **VRAM Total:** 25756696576 - **VRAM Free:** 14512908354 - **Torch VRAM Total:** 10234101760 - **Torch VRAM Free:** 749299778 ## Logs ``` 2024-10-12 20:46:43,812 - root - INFO - Total VRAM 24564 MB, total RAM 64729 MB 2024-10-12 20:46:43,812 - root - INFO - pytorch version: 2.4.0+cu121 2024-10-12 20:46:43,813 - root - INFO - Set vram state to: NORMAL_VRAM 2024-10-12 20:46:43,813 - root - INFO - Device: cuda:0 NVIDIA GeForce RTX 4090 : cudaMallocAsync 2024-10-12 20:46:44,294 - root - INFO - Using pytorch cross attention 2024-10-12 20:46:45,543 - root - INFO - [Prompt Server] web root: E:\AI\ComfyUI\web_custom_versions\Comfy-Org_ComfyUI_frontend\1.3.18 2024-10-12 20:46:46,486 - albumentations.check_version - INFO - A new version of Albumentations is available: 1.4.18 (you have 1.4.13). Upgrade using: pip install -U albumentations. To disable automatic update checks, set the environment variable NO_ALBUMENTATIONS_UPDATE to 1. 
2024-10-12 20:46:46,832 - root - INFO - Import times for custom nodes: 2024-10-12 20:46:46,832 - root - INFO - 0.0 seconds: E:\AI\ComfyUI\custom_nodes\websocket_image_save.py 2024-10-12 20:46:46,832 - root - INFO - 0.0 seconds: E:\AI\ComfyUI\custom_nodes\ComfyUI-Better-Numbers 2024-10-12 20:46:46,832 - root - INFO - 0.0 seconds: E:\AI\ComfyUI\custom_nodes\ComfyUI-Better-Strings 2024-10-12 20:46:46,832 - root - INFO - 0.0 seconds: E:\AI\ComfyUI\custom_nodes\SDXL_sizing 2024-10-12 20:46:46,832 - root - INFO - 0.0 seconds: E:\AI\ComfyUI\custom_nodes\Skimmed_CFG 2024-10-12 20:46:46,832 - root - INFO - 0.0 seconds: E:\AI\ComfyUI\custom_nodes\comfyui-diffusion-cg 2024-10-12 20:46:46,832 - root - INFO - 0.0 seconds: E:\AI\ComfyUI\custom_nodes\ComfyUI_ADV_CLIP_emb 2024-10-12 20:46:46,832 - root - INFO - 0.0 seconds: E:\AI\ComfyUI\custom_nodes\ComfyUI-AutomaticCFG 2024-10-12 20:46:46,832 - root - INFO - 0.0 seconds: E:\AI\ComfyUI\custom_nodes\ComfyUI-ComfyCouple 2024-10-12 20:46:46,832 - root - INFO - 0.0 seconds: E:\AI\ComfyUI\custom_nodes\ComfyUI-Custom-Scripts 2024-10-12 20:46:46,832 - root - INFO - 0.0 seconds: E:\AI\ComfyUI\custom_nodes\ComfyUI-GGUF 2024-10-12 20:46:46,832 - root - INFO - 0.0 seconds: E:\AI\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus 2024-10-12 20:46:46,832 - root - INFO - 0.0 seconds: E:\AI\ComfyUI\custom_nodes\comfyui_controlnet_aux 2024-10-12 20:46:46,832 - root - INFO - 0.0 seconds: E:\AI\ComfyUI\custom_nodes\comfyui-workspace-manager 2024-10-12 20:46:46,832 - root - INFO - 0.3 seconds: E:\AI\ComfyUI\custom_nodes\ComfyUI-Impact-Pack 2024-10-12 20:46:46,832 - root - INFO - 0.7 seconds: E:\AI\ComfyUI\custom_nodes\ComfyUI-Inspyrenet-Rembg 2024-10-12 20:46:46,832 - root - INFO - 2024-10-12 20:46:46,836 - root - INFO - Starting server 2024-10-12 20:46:46,837 - root - INFO - To see the GUI go to: http://127.0.0.1:8188 2024-10-12 20:46:49,795 - root - INFO - got prompt 2024-10-12 20:46:50,830 - root - INFO - Using pytorch attention in VAE 2024-10-12 20:46:50,831 - root - INFO - Using pytorch attention in VAE 2024-10-12 20:46:51,110 - root - INFO - model weight dtype torch.float16, manual cast: None 2024-10-12 20:46:51,110 - root - INFO - model_type EPS 2024-10-12 20:46:52,419 - root - INFO - got prompt 2024-10-12 20:46:53,660 - root - INFO - Using pytorch attention in VAE 2024-10-12 20:46:53,661 - root - INFO - Using pytorch attention in VAE 2024-10-12 20:46:53,936 - root - INFO - Requested to load SDXLClipModel 2024-10-12 20:46:53,936 - root - INFO - Loading 1 new model 2024-10-12 20:46:53,943 - root - INFO - loaded completely 0.0 1560.802734375 True 2024-10-12 20:46:55,758 - root - INFO - Requested to load SDXLClipModel 2024-10-12 20:46:55,758 - root - INFO - Loading 1 new model 2024-10-12 20:46:56,637 - root - INFO - loaded completely 0.0 1560.802734375 True 2024-10-12 20:46:56,901 - root - INFO - Requested to load SDXL 2024-10-12 20:46:56,901 - root - INFO - Loading 1 new model 2024-10-12 20:46:58,619 - root - INFO - loaded completely 0.0 4897.0483474731445 True 2024-10-12 20:47:07,047 - root - INFO - Requested to load AutoencoderKL 2024-10-12 20:47:07,047 - root - INFO - Loading 1 new model 2024-10-12 20:47:07,072 - root - INFO - loaded completely 0.0 159.55708122253418 True 2024-10-12 20:47:07,415 - root - INFO - Prompt executed in 17.02 seconds 2024-10-12 20:47:08,989 - root - INFO - Requested to load ControlNet 2024-10-12 20:47:08,989 - root - INFO - Loading 1 new model 2024-10-12 20:47:09,234 - root - INFO - loaded completely 0.0 2386.120147705078 True 2024-10-12 
20:47:09,330 - root - ERROR - !!! Exception during processing !!! The size of tensor a (10126) must match the size of tensor b (600) at non-singleton dimension 1 2024-10-12 20:47:09,333 - root - ERROR - Traceback (most recent call last): File "E:\AI\ComfyUI\execution.py", line 323, in execute output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\execution.py", line 198, in get_output_data return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\execution.py", line 169, in _map_node_over_list process_inputs(input_dict, i) File "E:\AI\ComfyUI\execution.py", line 158, in process_inputs results.append(getattr(obj, func)(**inputs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy_extras\nodes_custom_sampler.py", line 476, in sample samples = comfy.sample.sample_custom(model, noise, cfg, sampler, sigmas, positive, negative, latent_image, noise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=noise_seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\custom_nodes\comfyui-diffusion-cg\recenter.py", line 45, in sample_center return SAMPLE(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\sample.py", line 48, in sample_custom samples = comfy.samplers.sample(model, noise, positive, negative, cfg, model.load_device, sampler, sigmas, model_options=model.model_options, latent_image=latent_image, denoise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 729, in sample return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 716, in sample output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 695, in inner_sample samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 600, in sample samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\utils\_contextlib.py", 
line 116, in decorate_context return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\k_diffusion\sampling.py", line 664, in sample_dpmpp_2m denoised = model(x, sigmas[i] * s_in, **extra_args) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 299, in __call__ out = self.inner_model(x, sigma, model_options=model_options, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 682, in __call__ return self.predict_noise(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 685, in predict_noise return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\custom_nodes\ComfyUI-AutomaticCFG\nodes.py", line 52, in sampling_function_patched out = comfy.samplers.calc_cond_batch(model, conds, x, timestep, model_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 228, in calc_cond_batch output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\model_base.py", line 142, in apply_model model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 857, in forward h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 44, in forward_timestep_embed x = layer(x, context, transformer_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\attention.py", line 694, in forward x = block(x, context=context[i], transformer_options=transformer_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
"E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\attention.py", line 618, in forward n = attn2_replace_patch[block_attn2](n, context_attn2, value_attn2, extra_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\custom_nodes\ComfyUI-ComfyCouple\attention_couple.py", line 149, in patch qkv = qkv * masks ~~~~^~~~~~~ RuntimeError: The size of tensor a (10126) must match the size of tensor b (600) at non-singleton dimension 1 2024-10-12 20:47:09,336 - root - INFO - Prompt executed in 1.72 seconds 2024-10-12 20:48:20,145 - root - INFO - got prompt 2024-10-12 20:48:28,531 - root - ERROR - !!! Exception during processing !!! The size of tensor a (10126) must match the size of tensor b (600) at non-singleton dimension 1 2024-10-12 20:48:28,532 - root - ERROR - Traceback (most recent call last): File "E:\AI\ComfyUI\execution.py", line 323, in execute output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\execution.py", line 198, in get_output_data return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\execution.py", line 169, in _map_node_over_list process_inputs(input_dict, i) File "E:\AI\ComfyUI\execution.py", line 158, in process_inputs results.append(getattr(obj, func)(**inputs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy_extras\nodes_custom_sampler.py", line 476, in sample samples = comfy.sample.sample_custom(model, noise, cfg, sampler, sigmas, positive, negative, latent_image, noise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=noise_seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\custom_nodes\comfyui-diffusion-cg\recenter.py", line 45, in sample_center return SAMPLE(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\sample.py", line 48, in sample_custom samples = comfy.samplers.sample(model, noise, positive, negative, cfg, model.load_device, sampler, sigmas, model_options=model.model_options, latent_image=latent_image, denoise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 729, in sample return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 716, in sample output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed) 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 695, in inner_sample samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 600, in sample samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\k_diffusion\sampling.py", line 664, in sample_dpmpp_2m denoised = model(x, sigmas[i] * s_in, **extra_args) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 299, in __call__ out = self.inner_model(x, sigma, model_options=model_options, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 682, in __call__ return self.predict_noise(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 685, in predict_noise return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\custom_nodes\ComfyUI-AutomaticCFG\nodes.py", line 52, in sampling_function_patched out = comfy.samplers.calc_cond_batch(model, conds, x, timestep, model_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 228, in calc_cond_batch output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\model_base.py", line 142, in apply_model model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 857, in forward h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 44, in forward_timestep_embed x = layer(x, context, transformer_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
"E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\attention.py", line 694, in forward x = block(x, context=context[i], transformer_options=transformer_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\attention.py", line 618, in forward n = attn2_replace_patch[block_attn2](n, context_attn2, value_attn2, extra_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\custom_nodes\ComfyUI-ComfyCouple\attention_couple.py", line 149, in patch qkv = qkv * masks ~~~~^~~~~~~ RuntimeError: The size of tensor a (10126) must match the size of tensor b (600) at non-singleton dimension 1 2024-10-12 20:48:28,538 - root - INFO - Prompt executed in 7.77 seconds 2024-10-12 20:51:42,282 - root - INFO - got prompt 2024-10-12 20:51:47,026 - root - INFO - got prompt 2024-10-12 20:51:52,748 - root - INFO - Prompt executed in 9.89 seconds 2024-10-12 20:51:52,994 - root - INFO - Requested to load SDXL 2024-10-12 20:51:52,994 - root - INFO - Loading 1 new model 2024-10-12 20:52:07,614 - root - INFO - Prompt executed in 14.68 seconds 2024-10-12 20:52:28,738 - root - INFO - got prompt 2024-10-12 20:52:29,436 - root - INFO - Requested to load SDXL 2024-10-12 20:52:29,436 - root - INFO - Loading 1 new model 2024-10-12 20:52:43,776 - root - INFO - Prompt executed in 14.48 seconds 2024-10-12 20:52:46,438 - root - INFO - got prompt 2024-10-12 20:52:47,309 - root - INFO - Requested to load SDXL 2024-10-12 20:52:47,309 - root - INFO - Loading 1 new model 2024-10-12 20:53:01,644 - root - INFO - Prompt executed in 14.62 seconds 2024-10-12 20:53:30,449 - root - INFO - got prompt 2024-10-12 20:53:31,120 - root - INFO - Requested to load SDXL 2024-10-12 20:53:31,120 - root - INFO - Loading 1 new model 2024-10-12 20:53:45,380 - root - INFO - Prompt executed in 14.36 seconds 2024-10-12 20:53:56,618 - root - INFO - got prompt 2024-10-12 20:53:57,313 - root - INFO - Requested to load SDXL 2024-10-12 20:53:57,313 - root - INFO - Loading 1 new model 2024-10-12 20:54:11,274 - root - INFO - Prompt executed in 14.08 seconds 2024-10-12 20:54:17,009 - root - INFO - got prompt 2024-10-12 20:54:32,032 - root - INFO - Prompt executed in 14.44 seconds 2024-10-12 20:54:33,029 - root - INFO - got prompt 2024-10-12 20:54:33,761 - root - INFO - Requested to load SDXL 2024-10-12 20:54:33,761 - root - INFO - Loading 1 new model 2024-10-12 20:54:47,129 - root - INFO - Prompt executed in 13.55 seconds 2024-10-12 20:56:34,395 - root - INFO - got prompt 2024-10-12 20:56:35,195 - root - INFO - Requested to load SDXL 2024-10-12 20:56:35,195 - root - INFO - Loading 1 new model 2024-10-12 20:56:48,455 - root - INFO - Prompt executed in 13.50 seconds 2024-10-12 20:57:15,717 - root - INFO - got prompt 2024-10-12 20:57:16,451 - root - INFO - Requested to load SDXL 2024-10-12 
20:57:16,451 - root - INFO - Loading 1 new model 2024-10-12 20:57:29,941 - root - INFO - Prompt executed in 13.65 seconds 2024-10-12 20:58:45,766 - root - INFO - got prompt 2024-10-12 20:58:46,499 - root - INFO - Requested to load SDXL 2024-10-12 20:58:46,499 - root - INFO - Loading 1 new model 2024-10-12 20:58:56,320 - root - INFO - got prompt 2024-10-12 20:59:01,250 - root - INFO - Prompt executed in 14.94 seconds 2024-10-12 20:59:15,001 - root - INFO - Prompt executed in 13.53 seconds 2024-10-12 20:59:25,776 - root - INFO - got prompt 2024-10-12 20:59:26,719 - root - INFO - Requested to load SDXL 2024-10-12 20:59:26,719 - root - INFO - Loading 1 new model 2024-10-12 20:59:40,212 - root - INFO - Prompt executed in 13.88 seconds 2024-10-12 20:59:53,636 - root - INFO - got prompt 2024-10-12 20:59:54,508 - root - INFO - Requested to load SDXL 2024-10-12 20:59:54,508 - root - INFO - Loading 1 new model 2024-10-12 21:00:08,185 - root - INFO - Prompt executed in 13.97 seconds 2024-10-12 21:00:13,910 - root - INFO - got prompt 2024-10-12 21:00:14,592 - root - INFO - Requested to load SDXL 2024-10-12 21:00:14,592 - root - INFO - Loading 1 new model 2024-10-12 21:00:28,063 - root - INFO - Prompt executed in 13.63 seconds 2024-10-12 21:00:45,218 - root - INFO - got prompt 2024-10-12 21:00:45,917 - root - INFO - Requested to load SDXL 2024-10-12 21:00:45,917 - root - INFO - Loading 1 new model 2024-10-12 21:00:59,328 - root - INFO - Prompt executed in 13.56 seconds 2024-10-12 21:01:07,885 - root - INFO - got prompt 2024-10-12 21:01:22,150 - root - INFO - Prompt executed in 13.70 seconds 2024-10-12 21:01:57,124 - root - INFO - got prompt 2024-10-12 21:01:57,816 - root - INFO - Requested to load SDXL 2024-10-12 21:01:57,816 - root - INFO - Loading 1 new model 2024-10-12 21:02:11,405 - root - INFO - Prompt executed in 13.72 seconds 2024-10-12 21:05:28,207 - root - INFO - got prompt 2024-10-12 21:05:28,644 - root - ERROR - Failed to validate prompt for output 232: 2024-10-12 21:05:28,644 - root - ERROR - * CheckpointLoader|pysssss 496: 2024-10-12 21:05:28,644 - root - ERROR - - Custom validation failed for node: ckpt_name - Checkpoint not found: PDXL\ponyhk_ponySDXLV095c.safetensors 2024-10-12 21:05:28,644 - root - ERROR - Output will be ignored 2024-10-12 21:05:28,760 - root - INFO - Prompt executed in 0.11 seconds 2024-10-12 21:05:31,823 - root - INFO - got prompt 2024-10-12 21:05:32,341 - root - ERROR - Failed to validate prompt for output 232: 2024-10-12 21:05:32,341 - root - ERROR - * CheckpointLoader|pysssss 496: 2024-10-12 21:05:32,341 - root - ERROR - - Custom validation failed for node: ckpt_name - Checkpoint not found: PDXL\ponyhk_ponySDXLV095c.safetensors 2024-10-12 21:05:32,341 - root - ERROR - Output will be ignored 2024-10-12 21:05:32,362 - root - INFO - Prompt executed in 0.02 seconds 2024-10-12 21:06:54,749 - root - INFO - got prompt 2024-10-12 21:06:56,053 - root - INFO - model weight dtype torch.float16, manual cast: None 2024-10-12 21:06:56,054 - root - INFO - model_type EPS 2024-10-12 21:06:58,744 - root - INFO - Using pytorch attention in VAE 2024-10-12 21:06:58,745 - root - INFO - Using pytorch attention in VAE 2024-10-12 21:06:59,006 - root - INFO - Requested to load SDXLClipModel 2024-10-12 21:06:59,006 - root - INFO - Loading 1 new model 2024-10-12 21:06:59,013 - root - INFO - loaded completely 0.0 1560.802734375 True 2024-10-12 21:07:00,655 - root - INFO - Requested to load SDXLClipModel 2024-10-12 21:07:00,655 - root - INFO - Loading 1 new model 2024-10-12 21:07:01,454 - root 
- INFO - loaded completely 0.0 1560.802734375 True 2024-10-12 21:07:01,612 - root - INFO - Requested to load SDXL 2024-10-12 21:07:01,612 - root - INFO - Loading 1 new model 2024-10-12 21:07:03,414 - root - INFO - loaded completely 0.0 4897.0483474731445 True 2024-10-12 21:07:48,555 - root - INFO - Prompt executed in 53.25 seconds 2024-10-12 21:08:53,157 - root - INFO - got prompt 2024-10-12 21:08:53,497 - root - INFO - model weight dtype torch.float16, manual cast: None 2024-10-12 21:08:53,497 - root - INFO - model_type EPS 2024-10-12 21:09:01,739 - root - INFO - Using pytorch attention in VAE 2024-10-12 21:09:01,739 - root - INFO - Using pytorch attention in VAE 2024-10-12 21:09:02,259 - root - INFO - Requested to load SDXLClipModel 2024-10-12 21:09:02,259 - root - INFO - Loading 1 new model 2024-10-12 21:09:02,266 - root - INFO - loaded completely 0.0 1560.802734375 True 2024-10-12 21:09:04,708 - root - INFO - Requested to load SDXLClipModel 2024-10-12 21:09:04,708 - root - INFO - Loading 1 new model 2024-10-12 21:09:04,977 - root - INFO - Requested to load SDXL 2024-10-12 21:09:04,977 - root - INFO - Loading 1 new model 2024-10-12 21:09:05,571 - root - INFO - loaded completely 0.0 4897.0483474731445 True 2024-10-12 21:09:19,270 - root - INFO - loaded completely 14672.91426826477 2386.120147705078 True 2024-10-12 21:09:33,571 - root - INFO - got prompt 2024-10-12 21:09:51,039 - root - INFO - Prompt executed in 57.88 seconds 2024-10-12 21:09:51,491 - root - INFO - loaded completely 18879.2771900177 4897.0483474731445 True 2024-10-12 21:10:05,384 - root - INFO - Prompt executed in 14.09 seconds 2024-10-12 21:10:17,290 - root - INFO - got prompt 2024-10-12 21:10:20,342 - root - INFO - Processing interrupted 2024-10-12 21:10:20,343 - root - INFO - Prompt executed in 3.05 seconds 2024-10-12 21:12:02,499 - root - INFO - got prompt 2024-10-12 21:12:04,097 - root - INFO - Requested to load SDXLClipModel 2024-10-12 21:12:04,097 - root - INFO - Loading 1 new model 2024-10-12 21:12:04,911 - root - INFO - loaded completely 0.0 1560.802734375 True 2024-10-12 21:12:05,099 - root - INFO - Requested to load SDXL 2024-10-12 21:12:05,099 - root - INFO - Loading 1 new model 2024-10-12 21:12:07,933 - root - INFO - loaded completely 0.0 4897.0483474731445 True 2024-10-12 21:12:21,880 - root - INFO - Prompt executed in 18.84 seconds 2024-10-12 21:12:29,609 - root - INFO - got prompt 2024-10-12 21:13:13,675 - root - INFO - Prompt executed in 43.50 seconds 2024-10-12 21:15:02,125 - root - INFO - got prompt 2024-10-12 21:15:02,763 - root - INFO - Requested to load SDXLClipModel 2024-10-12 21:15:02,763 - root - INFO - Loading 1 new model 2024-10-12 21:15:03,525 - root - INFO - loaded completely 0.0 1560.802734375 True 2024-10-12 21:15:03,687 - root - INFO - Requested to load SDXL 2024-10-12 21:15:03,687 - root - INFO - Loading 1 new model 2024-10-12 21:15:04,391 - root - INFO - loaded completely 9.5367431640625e+25 4897.0483474731445 True 2024-10-12 21:15:04,410 - root - ERROR - !!! Exception during processing !!! Sizes of tensors must match except in dimension 0. Expected size 77 but got size 231 for tensor number 1 in the list. 
2024-10-12 21:15:04,411 - root - ERROR - Traceback (most recent call last): File "E:\AI\ComfyUI\execution.py", line 323, in execute output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\execution.py", line 198, in get_output_data return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\execution.py", line 169, in _map_node_over_list process_inputs(input_dict, i) File "E:\AI\ComfyUI\execution.py", line 158, in process_inputs results.append(getattr(obj, func)(**inputs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy_extras\nodes_custom_sampler.py", line 476, in sample samples = comfy.sample.sample_custom(model, noise, cfg, sampler, sigmas, positive, negative, latent_image, noise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=noise_seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\custom_nodes\comfyui-diffusion-cg\recenter.py", line 45, in sample_center return SAMPLE(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\sample.py", line 48, in sample_custom samples = comfy.samplers.sample(model, noise, positive, negative, cfg, model.load_device, sampler, sigmas, model_options=model.model_options, latent_image=latent_image, denoise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 729, in sample return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 716, in sample output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 695, in inner_sample samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 600, in sample samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\k_diffusion\sampling.py", line 664, in sample_dpmpp_2m 
denoised = model(x, sigmas[i] * s_in, **extra_args) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 299, in __call__ out = self.inner_model(x, sigma, model_options=model_options, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 682, in __call__ return self.predict_noise(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 685, in predict_noise return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\custom_nodes\ComfyUI-AutomaticCFG\nodes.py", line 52, in sampling_function_patched out = comfy.samplers.calc_cond_batch(model, conds, x, timestep, model_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 228, in calc_cond_batch output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\model_base.py", line 142, in apply_model model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 857, in forward h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 44, in forward_timestep_embed x = layer(x, context, transformer_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\attention.py", line 694, in forward x = block(x, context=context[i], transformer_options=transformer_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\attention.py", line 
618, in forward n = attn2_replace_patch[block_attn2](n, context_attn2, value_attn2, extra_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\custom_nodes\ComfyUI-ComfyCouple\attention_couple.py", line 120, in patch context_cond = torch.cat([cond for cond in self.negative_positive_conds[1]], dim=0) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ RuntimeError: Sizes of tensors must match except in dimension 0. Expected size 77 but got size 231 for tensor number 1 in the list. 2024-10-12 21:15:04,415 - root - INFO - Prompt executed in 1.71 seconds
```

## Attached Workflow

Please make sure that workflow does not contain any sensitive information such as API keys or passwords.

```
Workflow too large. Please manually upload the workflow from local file system.
```

## Additional Context

(Please add any additional context or steps to reproduce the error here)
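For what it's worth, the first failure in the report above (`qkv = qkv * masks` in `attention_couple.py`) looks like the same class of shape mismatch, just surfacing as a broadcasting error instead of a `torch.cat` error. A minimal sketch with dummy tensors of the sizes from the trace (the shapes are placeholders, not the node's real data):

```python
import torch

# Placeholder shapes only: 10126 attention tokens vs. a 600-position mask,
# matching the sizes reported in the trace above.
qkv = torch.randn(2, 10126, 640)
masks = torch.randn(2, 600, 1)

try:
    qkv * masks  # elementwise multiply cannot broadcast dim 1 (10126 vs 600)
except RuntimeError as e:
    print(e)
    # The size of tensor a (10126) must match the size of tensor b (600)
    # at non-singleton dimension 1
```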
Hiya 👋
The issue occurs when the two prompts you feed in differ in size by any significant margin (a minimal reproduction with dummy tensors follows the trace below):
Trace
``` Error occurred when executing KSampler: Sizes of tensors must match except in dimension 0. Expected size 154 but got size 77 for tensor number 1 in the list. File "E:\AI\ComfyUI\execution.py", line 151, in recursive_execute output_data, output_ui = get_output_data(obj, input_data_all) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\execution.py", line 81, in get_output_data return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\execution.py", line 74, in map_node_over_list results.append(getattr(obj, func)(**slice_dict(input_data_all, i))) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\nodes.py", line 1369, in sample return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\nodes.py", line 1339, in common_ksampler samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 22, in informative_sample raise e File "E:\AI\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 9, in informative_sample return original_sample(*args, **kwargs) # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations. ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\sample.py", line 100, in sample samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 705, in sample return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 610, in sample samples = sampler.sample(model_wrap, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 548, in sample samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
"C:\Users\*\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\k_diffusion\sampling.py", line 580, in sample_dpmpp_2m denoised = model(x, sigmas[i] * s_in, **extra_args) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\*\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\*\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 286, in forward out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, model_options=model_options, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\*\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\*\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 273, in forward return self.apply_model(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 270, in apply_model out = sampling_function(self.inner_model, x, timestep, uncond, cond, cond_scale, model_options=model_options, seed=seed) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\custom_nodes\ComfyUI-AutomaticCFG\nodes.py", line 16, in sampling_function_patched cond_pred, uncond_pred = comfy.samplers.calc_cond_uncond_batch(model, cond, uncond_, x, timestep, model_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\samplers.py", line 224, in calc_cond_uncond_batch output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\model_base.py", line 96, in apply_model model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\*\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\*\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 850, in forward h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator) 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 44, in forward_timestep_embed x = layer(x, context, transformer_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\*\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\*\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\attention.py", line 633, in forward x = block(x, context=context[i], transformer_options=transformer_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\*\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\*\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\custom_nodes\ComfyUI-layerdiffuse\lib_layerdiffusion\attention_sharing.py", line 253, in forward return func(self, x, context, transformer_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\attention.py", line 460, in forward return checkpoint(self._forward, (x, context, transformer_options), self.parameters(), self.checkpoint) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\diffusionmodules\util.py", line 191, in checkpoint return func(*inputs) ^^^^^^^^^^^^^ File "E:\AI\ComfyUI\comfy\ldm\modules\attention.py", line 557, in _forward n = attn2_replace_patch[block_attn2](n, context_attn2, value_attn2, extra_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "E:\AI\ComfyUI\custom_nodes\ComfyUI-ComfyCouple\attention_couple.py", line 120, in patch context_cond = torch.cat([cond for cond in self.negative_positive_conds[1]], dim=0) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ```
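To make the failure concrete, here is a minimal sketch (not ComfyCouple's actual code) that reproduces the same `RuntimeError` with dummy conditioning tensors: a short prompt encodes to one 77-token chunk while a long one spans three (231 tokens), and `torch.cat` along dim 0 then refuses to combine them because dim 1 differs.

```python
import torch

# Illustrative SDXL-like shapes (batch, tokens, embedding_dim); the token
# counts 77 and 231 are the ones reported in the traces above.
cond_short = torch.randn(1, 77, 2048)
cond_long = torch.randn(1, 231, 2048)

try:
    torch.cat([cond_short, cond_long], dim=0)  # mirrors the failing call in attention_couple.py
except RuntimeError as e:
    print(e)
    # Sizes of tensors must match except in dimension 0.
    # Expected size 77 but got size 231 for tensor number 1 in the list.
```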
You can work around this by padding the smaller prompt with
`__________________,`
repeated to a significant enough length, but it is quite strange that such padding is needed at all.
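At the tensor level, padding the shorter prompt with filler does the same job as padding its conditioning up to the longer token count before the concatenation. A rough sketch of that idea, using a hypothetical `pad_conds_to_longest` helper (this is not the branch's actual fix):

```python
import torch

def pad_conds_to_longest(conds):
    """Hypothetical helper: pad every conditioning tensor along the token
    axis (dim 1) to the longest one so torch.cat(dim=0) can succeed."""
    max_tokens = max(c.shape[1] for c in conds)
    padded = []
    for c in conds:
        if c.shape[1] < max_tokens:
            # Repeat the final token embedding to fill the gap; zero-padding
            # would also equalise the shapes, just with different weighting.
            fill = c[:, -1:, :].repeat(1, max_tokens - c.shape[1], 1)
            c = torch.cat([c, fill], dim=1)
        padded.append(c)
    return padded

cond_short = torch.randn(1, 77, 2048)
cond_long = torch.randn(1, 231, 2048)
context_cond = torch.cat(pad_conds_to_longest([cond_short, cond_long]), dim=0)
print(context_cond.shape)  # torch.Size([2, 231, 2048])
```

Whether repeating the last token or zero-padding is the right behaviour for attention coupling is a separate question; the sketch only shows why equalising the token dimension makes the error go away.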