kijai / ComfyUI-CogVideoXWrapper


Image condition latents required for I2V models #268

Open jeffreyrobeson opened 1 day ago

jeffreyrobeson commented 1 day ago

# ComfyUI Error Report

## Error Details

## System Information
- **ComfyUI Version:** v0.2.7
- **Arguments:** F:\aigc\Comfyui-Flux-nf4\ComfyUI\main.py --port 8169 --auto-launch --preview-method auto --disable-cuda-malloc
- **OS:** nt
- **Python Version:** 3.11.9 (tags/v3.11.9:de54cf5, Apr  2 2024, 10:12:12) [MSC v.1938 64 bit (AMD64)]
- **Embedded Python:** true
- **PyTorch Version:** 2.4.0+cu121
## Devices

- **Name:** cuda:0 NVIDIA GeForce RTX 4060 : cudaMallocAsync
  - **Type:** cuda
  - **VRAM Total:** 8585216000
  - **VRAM Free:** 968040628
  - **Torch VRAM Total:** 6375342080
  - **Torch VRAM Free:** 118608052

## Logs

2024-11-20 22:46:19,033 - root - INFO - Total VRAM 8188 MB, total RAM 32598 MB 2024-11-20 22:46:19,033 - root - INFO - pytorch version: 2.4.0+cu121 2024-11-20 22:46:19,034 - root - INFO - Set vram state to: NORMAL_VRAM 2024-11-20 22:46:19,034 - root - INFO - Device: cuda:0 NVIDIA GeForce RTX 4060 : cudaMallocAsync 2024-11-20 22:46:21,173 - root - INFO - Using pytorch cross attention 2024-11-20 22:46:22,210 - root - INFO - [Prompt Server] web root: F:\aigc\Comfyui-Flux-nf4\ComfyUI\web 2024-11-20 22:46:22,212 - root - INFO - Adding extra search path checkpoints F:\aigc\sd-webui-aki\sd-webui-aki-v4.6.1\models/Stable-diffusion 2024-11-20 22:46:22,212 - root - INFO - Adding extra search path configs F:\aigc\sd-webui-aki\sd-webui-aki-v4.6.1\models/Stable-diffusion 2024-11-20 22:46:22,212 - root - INFO - Adding extra search path vae F:\aigc\sd-webui-aki\sd-webui-aki-v4.6.1\models/VAE 2024-11-20 22:46:22,212 - root - INFO - Adding extra search path loras F:\aigc\sd-webui-aki\sd-webui-aki-v4.6.1\models/Lora 2024-11-20 22:46:22,212 - root - INFO - Adding extra search path loras F:\aigc\sd-webui-aki\sd-webui-aki-v4.6.1\models/LyCORIS 2024-11-20 22:46:22,212 - root - INFO - Adding extra search path upscale_models F:\aigc\sd-webui-aki\sd-webui-aki-v4.6.1\models/ESRGAN 2024-11-20 22:46:22,212 - root - INFO - Adding extra search path upscale_models F:\aigc\sd-webui-aki\sd-webui-aki-v4.6.1\models/RealESRGAN 2024-11-20 22:46:22,212 - root - INFO - Adding extra search path upscale_models F:\aigc\sd-webui-aki\sd-webui-aki-v4.6.1\models/SwinIR 2024-11-20 22:46:22,212 - root - INFO - Adding extra search path embeddings F:\aigc\sd-webui-aki\sd-webui-aki-v4.6.1\embeddings 2024-11-20 22:46:22,212 - root - INFO - Adding extra search path hypernetworks F:\aigc\sd-webui-aki\sd-webui-aki-v4.6.1\models/hypernetworks 2024-11-20 22:46:22,212 - root - INFO - Adding extra search path controlnet F:\aigc\sd-webui-aki\sd-webui-aki-v4.6.1\models/ControlNet 2024-11-20 22:46:22,212 - root - INFO - Adding extra search path ipadapter F:\aigc\sd-webui-aki\sd-webui-aki-v4.6.1\models\ipadapter 2024-11-20 22:46:22,212 - root - INFO - Adding extra search path clip_vision F:\aigc\sd-webui-aki\sd-webui-aki-v4.6.1\models\clipvision 2024-11-20 22:46:22,574 - bizyair.nodes_base - WARNING - Display name '☁️BizyAir SetUnionControlNetType' might differ from the native display name. 2024-11-20 22:46:22,574 - bizyair.nodes_base - WARNING - Display name '☁️BizyAir CLIPTextEncodeFlux' might differ from the native display name. 2024-11-20 22:46:22,574 - bizyair.nodes_base - WARNING - Display name '☁️BizyAir FluxGuidance' might differ from the native display name. 2024-11-20 22:46:22,575 - bizyair.nodes_base - WARNING - Display name '☁️BizyAir IPAdapterModelLoader' might differ from the native display name. 2024-11-20 22:46:22,575 - bizyair.nodes_base - WARNING - Display name '☁️BizyAir IPAdapterAdvanced' might differ from the native display name. 2024-11-20 22:46:22,576 - bizyair.nodes_base - WARNING - Display name '☁️BizyAir IPAdapterStyleComposition' might differ from the native display name. 2024-11-20 22:46:22,577 - bizyair.nodes_base - WARNING - Display name '☁️BizyAir UltimateSDUpscale' might differ from the native display name. 2024-11-20 22:46:22,579 - bizyair.nodes_base - WARNING - Display name '☁️BizyAir MinusZoneChatGLM3TextEncode' might differ from the native display name. 2024-11-20 22:46:28,159 - bizyair.nodes_base - WARNING - Display name '☁️BizyAir SamplerCustomAdvanced' might differ from the native display name. 
2024-11-20 22:46:28,160 - bizyair.nodes_base - WARNING - Display name '☁️BizyAir BasicGuider' might differ from the native display name. 2024-11-20 22:46:28,160 - bizyair.nodes_base - WARNING - Display name '☁️BizyAir BasicScheduler' might differ from the native display name. 2024-11-20 22:46:28,160 - bizyair.nodes_base - WARNING - Display name '☁️BizyAir DualCLIPLoader' might differ from the native display name. 2024-11-20 22:46:28,160 - bizyair.nodes_base - WARNING - Display name '☁️BizyAir KSamplerSelect' might differ from the native display name. 2024-11-20 22:46:28,160 - bizyair.nodes_base - WARNING - Display name '☁️BizyAir RandomNoise' might differ from the native display name. 2024-11-20 22:46:28,160 - bizyair.nodes_base - WARNING - Display name '☁️BizyAir InpaintModelConditioning' might differ from the native display name. 2024-11-20 22:46:31,611 - root - INFO - Total VRAM 8188 MB, total RAM 32598 MB 2024-11-20 22:46:31,611 - root - INFO - pytorch version: 2.4.0+cu121 2024-11-20 22:46:31,612 - root - INFO - Set vram state to: NORMAL_VRAM 2024-11-20 22:46:31,613 - root - INFO - Device: cuda:0 NVIDIA GeForce RTX 4060 : cudaMallocAsync 2024-11-20 22:46:33,161 - root - INFO - -------------- 2024-11-20 22:46:33,161 - root - INFO -  ### Mixlab Nodes: Loaded 2024-11-20 22:46:33,172 - root - INFO - ChatGPT.available True 2024-11-20 22:46:33,173 - root - INFO - editmask.available True 2024-11-20 22:46:33,390 - root - INFO - ClipInterrogator.available True 2024-11-20 22:46:33,489 - root - INFO - PromptGenerate.available True 2024-11-20 22:46:33,489 - root - INFO - ChinesePrompt.available True 2024-11-20 22:46:33,489 - root - INFO - RembgNode.available True 2024-11-20 22:46:33,871 - root - INFO - TripoSR.available 2024-11-20 22:46:33,872 - root - INFO - MiniCPMNode.available 2024-11-20 22:46:33,976 - root - INFO - Scenedetect.available 2024-11-20 22:46:34,047 - root - INFO - FishSpeech.available 2024-11-20 22:46:34,055 - root - INFO - SenseVoice.available 2024-11-20 22:46:34,272 - root - INFO - Whisper.available False 2024-11-20 22:46:34,279 - root - INFO - FalVideo.available 2024-11-20 22:46:34,279 - root - INFO -  --------------  2024-11-20 22:46:40,481 - root - WARNING - Traceback (most recent call last): File "F:\aigc\Comfyui-Flux-nf4\ComfyUI\nodes.py", line 2012, in load_custom_node module_spec.loader.exec_module(module) File "", line 936, in exec_module File "", line 1073, in get_code File "", line 1130, in get_data FileNotFoundError: [Errno 2] No such file or directory: 'F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\ComfyUI_InstantID\init.py'

2024-11-20 22:46:40,481 - root - WARNING - Cannot import F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\ComfyUI_InstantID module for custom nodes: [Errno 2] No such file or directory: 'F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\ComfyUI_InstantID\init.py' 2024-11-20 22:46:40,650 - root - INFO - Import times for custom nodes: 2024-11-20 22:46:40,650 - root - INFO - 0.0 seconds: F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\websocket_image_save.py 2024-11-20 22:46:40,650 - root - INFO - 0.0 seconds: F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\ComfyUI_bitsandbytes_NF4 2024-11-20 22:46:40,650 - root - INFO - 0.0 seconds: F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\AIGODLIKE-COMFYUI-TRANSLATION 2024-11-20 22:46:40,650 - root - INFO - 0.0 seconds: F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\SD-Latent-Upscaler 2024-11-20 22:46:40,650 - root - INFO - 0.0 seconds: F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\SD-Latent-Interposer 2024-11-20 22:46:40,650 - root - INFO - 0.0 seconds: F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\ComfyUI-BiRefNet-Hugo 2024-11-20 22:46:40,650 - root - INFO - 0.0 seconds: F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\cg-use-everywhere 2024-11-20 22:46:40,650 - root - INFO - 0.0 seconds: F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\comfyui_storydiffusion 2024-11-20 22:46:40,650 - root - INFO - 0.0 seconds: F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\comfyui_fk_server 2024-11-20 22:46:40,650 - root - INFO - 0.0 seconds: F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\Comfyui_CXH_joy_caption 2024-11-20 22:46:40,650 - root - INFO - 0.0 seconds: F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\ComfyLiterals 2024-11-20 22:46:40,650 - root - INFO - 0.0 seconds (IMPORT FAILED): F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\ComfyUI_InstantID 2024-11-20 22:46:40,650 - root - INFO - 0.0 seconds: F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\ComfyUI-Custom-Scripts 2024-11-20 22:46:40,650 - root - INFO - 0.0 seconds: F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\comfyui-various 2024-11-20 22:46:40,650 - root - INFO - 0.0 seconds: F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\ComfyUI_essentials 2024-11-20 22:46:40,651 - root - INFO - 0.0 seconds: F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\ComfyUI_JPS-Nodes 2024-11-20 22:46:40,651 - root - INFO - 0.0 seconds: F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\Derfuu_ComfyUI_ModdedNodes 2024-11-20 22:46:40,651 - root - INFO - 0.0 seconds: F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\ComfyUI-Frame-Interpolation 2024-11-20 22:46:40,651 - root - INFO - 0.0 seconds: F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\ComfyUI-GGUF 2024-11-20 22:46:40,651 - root - INFO - 0.0 seconds: F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\rgthree-comfy 2024-11-20 22:46:40,651 - root - INFO - 0.0 seconds: F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\efficiency-nodes-comfyui 2024-11-20 22:46:40,651 - root - INFO - 0.0 seconds: F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\ComfyUI_UltimateSDUpscale 2024-11-20 22:46:40,651 - root - INFO - 0.0 seconds: F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\Comfyui-ergouzi-Nodes 2024-11-20 22:46:40,651 - root - INFO - 0.0 seconds: F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\comfyui-workspace-manager 2024-11-20 22:46:40,651 - root - INFO - 0.0 seconds: F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\ComfyUI-KJNodes 2024-11-20 22:46:40,651 - root - INFO - 0.0 seconds: F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\ComfyUI_Comfyroll_CustomNodes 2024-11-20 22:46:40,651 - root - INFO - 0.0 seconds: 
F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\ComfyUI-VideoHelperSuite 2024-11-20 22:46:40,651 - root - INFO - 0.0 seconds: F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\ComfyUI_MiniCPM-V-2_6-int4 2024-11-20 22:46:40,651 - root - INFO - 0.1 seconds: F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\Comfyui-ergouzi-DGNJD 2024-11-20 22:46:40,651 - root - INFO - 0.1 seconds: F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\ComfyUI_LayerStyle 2024-11-20 22:46:40,651 - root - INFO - 0.1 seconds: F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\ComfyUI-Crystools 2024-11-20 22:46:40,651 - root - INFO - 0.2 seconds: F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\comfyui_controlnet_aux 2024-11-20 22:46:40,651 - root - INFO - 0.3 seconds: F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\ComfyUI-CogVideoXWrapper 2024-11-20 22:46:40,651 - root - INFO - 0.4 seconds: F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\clipseg.py 2024-11-20 22:46:40,651 - root - INFO - 0.5 seconds: F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\ComfyUI-Easy-Use 2024-11-20 22:46:40,651 - root - INFO - 0.5 seconds: F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\comfyui-ollama 2024-11-20 22:46:40,651 - root - INFO - 0.5 seconds: F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\ComfyUI-Manager 2024-11-20 22:46:40,651 - root - INFO - 1.5 seconds: F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\ComfyUi-Ollama-YN 2024-11-20 22:46:40,651 - root - INFO - 1.7 seconds: F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\ComfyUI_Custom_Nodes_AlekPet 2024-11-20 22:46:40,651 - root - INFO - 2.1 seconds: F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\ComfyUI-Impact-Pack 2024-11-20 22:46:40,651 - root - INFO - 2.1 seconds: F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\comfyui-mixlab-nodes 2024-11-20 22:46:40,651 - root - INFO - 2.3 seconds: F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\ComfyUI_FaceAnalysis 2024-11-20 22:46:40,651 - root - INFO - 5.6 seconds: F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\BizyAir 2024-11-20 22:46:40,651 - root - INFO - 2024-11-20 22:46:40,668 - root - INFO - Starting server

2024-11-20 22:46:40,668 - root - INFO - To see the GUI go to: http://127.0.0.1:8169 2024-11-20 22:46:51,889 - asyncio - ERROR - Exception in callback _ProactorBasePipeTransport._call_connection_lost(None) handle: <Handle _ProactorBasePipeTransport._call_connection_lost(None)> Traceback (most recent call last): File "asyncio\events.py", line 84, in _run File "asyncio\proactor_events.py", line 165, in _call_connection_lost ConnectionResetError: [WinError 10054] 远程主机强迫关闭了一个现有的连接。 2024-11-20 22:46:59,892 - asyncio - ERROR - Exception in callback _ProactorBasePipeTransport._call_connection_lost(None) handle: <Handle _ProactorBasePipeTransport._call_connection_lost(None)> Traceback (most recent call last): File "asyncio\events.py", line 84, in _run File "asyncio\proactor_events.py", line 165, in _call_connection_lost ConnectionResetError: [WinError 10054] 远程主机强迫关闭了一个现有的连接。 2024-11-20 22:47:49,560 - root - INFO - got prompt 2024-11-20 22:48:15,925 - asyncio - ERROR - Exception in callback _ProactorBasePipeTransport._call_connection_lost(None) handle: <Handle _ProactorBasePipeTransport._call_connection_lost(None)> Traceback (most recent call last): File "asyncio\events.py", line 84, in _run File "asyncio\proactor_events.py", line 165, in _call_connection_lost ConnectionResetError: [WinError 10054] 远程主机强迫关闭了一个现有的连接。 2024-11-20 22:49:04,000 - root - ERROR - !!! Exception during processing !!! Allocation on device 2024-11-20 22:49:04,048 - root - ERROR - Traceback (most recent call last): File "F:\aigc\Comfyui-Flux-nf4\ComfyUI\execution.py", line 323, in execute output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\aigc\Comfyui-Flux-nf4\ComfyUI\execution.py", line 198, in get_output_data return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\aigc\Comfyui-Flux-nf4\ComfyUI\execution.py", line 169, in _map_node_over_list process_inputs(input_dict, i) File "F:\aigc\Comfyui-Flux-nf4\ComfyUI\execution.py", line 158, in process_inputs results.append(getattr(obj, func)(inputs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\ComfyUI-CogVideoXWrapper\nodes.py", line 263, in encode start_latents = vae.encode(start_image).latent_dist.sample(generator) ^^^^^^^^^^^^^^^^^^^^^^^ File "F:\aigc\Comfyui-Flux-nf4\python_embeded\Lib\site-packages\diffusers\utils\accelerate_utils.py", line 46, in wrapper return method(self, *args, *kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\aigc\Comfyui-Flux-nf4\python_embeded\Lib\site-packages\diffusers\models\autoencoders\autoencoder_kl_cogvideox.py", line 1223, in encode h = self._encode(x) ^^^^^^^^^^^^^^^ File "F:\aigc\Comfyui-Flux-nf4\python_embeded\Lib\site-packages\diffusers\models\autoencoders\autoencoder_kl_cogvideox.py", line 1195, in _encode x_intermediate, conv_cache = self.encoder(x_intermediate, conv_cache=conv_cache) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\aigc\Comfyui-Flux-nf4\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl return self._call_impl(args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
"F:\aigc\Comfyui-Flux-nf4\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl return forward_call(*args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\aigc\Comfyui-Flux-nf4\python_embeded\Lib\site-packages\diffusers\models\autoencoders\autoencoder_kl_cogvideox.py", line 799, in forward hidden_states, new_conv_cache[conv_cache_key] = down_block( ^^^^^^^^^^^ File "F:\aigc\Comfyui-Flux-nf4\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, *kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\aigc\Comfyui-Flux-nf4\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl return forward_call(args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\aigc\Comfyui-Flux-nf4\python_embeded\Lib\site-packages\diffusers\models\autoencoders\autoencoder_kl_cogvideox.py", line 431, in forward hidden_states, new_conv_cache[conv_cache_key] = resnet( ^^^^^^^ File "F:\aigc\Comfyui-Flux-nf4\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\aigc\Comfyui-Flux-nf4\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl return forward_call(*args, *kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\aigc\Comfyui-Flux-nf4\python_embeded\Lib\site-packages\diffusers\models\autoencoders\autoencoder_kl_cogvideox.py", line 308, in forward hidden_states, new_conv_cache["conv2"] = self.conv2(hidden_states, conv_cache=conv_cache.get("conv2")) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\aigc\Comfyui-Flux-nf4\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl return self._call_impl(args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\aigc\Comfyui-Flux-nf4\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\aigc\Comfyui-Flux-nf4\python_embeded\Lib\site-packages\diffusers\models\autoencoders\autoencoder_kl_cogvideox.py", line 134, in forward inputs = F.pad(inputs, padding_2d, mode="constant", value=0) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\aigc\Comfyui-Flux-nf4\python_embeded\Lib\site-packages\torch\nn\functional.py", line 4552, in pad return torch._C._nn.pad(input, pad, mode, value) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ torch.OutOfMemoryError: Allocation on device

2024-11-20 22:49:04,049 - root - ERROR - Got an OOM, unloading all loaded models.
2024-11-20 22:49:06,535 - root - INFO - Prompt executed in 76.95 seconds
2024-11-20 22:50:16,038 - asyncio - ERROR - Exception in callback _ProactorBasePipeTransport._call_connection_lost(None) handle: <Handle _ProactorBasePipeTransport._call_connection_lost(None)> Traceback (most recent call last): File "asyncio\events.py", line 84, in _run File "asyncio\proactor_events.py", line 165, in _call_connection_lost ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host.
2024-11-20 22:50:46,048 - asyncio - ERROR - Exception in callback _ProactorBasePipeTransport._call_connection_lost(None) handle: <Handle _ProactorBasePipeTransport._call_connection_lost(None)> Traceback (most recent call last): File "asyncio\events.py", line 84, in _run File "asyncio\proactor_events.py", line 165, in _call_connection_lost ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host.
2024-11-20 22:51:16,055 - asyncio - ERROR - Exception in callback _ProactorBasePipeTransport._call_connection_lost(None) handle: <Handle _ProactorBasePipeTransport._call_connection_lost(None)> Traceback (most recent call last): File "asyncio\events.py", line 84, in _run File "asyncio\proactor_events.py", line 165, in _call_connection_lost ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host.
2024-11-20 22:51:33,492 - root - INFO - got prompt
2024-11-20 22:51:34,927 - ComfyUI-CogVideoXWrapper.utils - INFO - Encoded latents shape: torch.Size([1, 1, 16, 48, 84])
2024-11-20 22:51:46,060 - asyncio - ERROR - Exception in callback _ProactorBasePipeTransport._call_connection_lost(None) handle: <Handle _ProactorBasePipeTransport._call_connection_lost(None)> Traceback (most recent call last): File "asyncio\events.py", line 84, in _run File "asyncio\proactor_events.py", line 165, in _call_connection_lost ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host.
2024-11-20 22:51:58,879 - root - INFO - Requested to load SD3ClipModel
2024-11-20 22:51:58,879 - root - INFO - Loading 1 new model
2024-11-20 22:51:58,881 - root - INFO - loaded partially 64.0 56.193359375 0
2024-11-20 22:52:10,787 - root - INFO - Unloading models for lowram load.
2024-11-20 22:52:12,261 - root - INFO - 1 models unloaded.
2024-11-20 22:52:12,261 - root - INFO - Loading 1 new model
2024-11-20 22:52:12,305 - root - INFO - loaded partially 174.5860717773437 165.693359375 0
2024-11-20 22:52:12,307 - root - INFO - loaded partially 174.5860717773437 165.693359375 0
2024-11-20 22:52:15,279 - ComfyUI-CogVideoXWrapper.pipeline_cogvideox - INFO - Only one image conditioning frame received, img2vid
2024-11-20 22:52:15,279 - ComfyUI-CogVideoXWrapper.pipeline_cogvideox - INFO - Context schedule disabled
2024-11-20 22:52:15,411 - ComfyUI-CogVideoXWrapper.pipeline_cogvideox - INFO - Sampling 49 frames in 13 latent frames at 672x384 with 25 inference steps
2024-11-20 22:52:15,993 - root - ERROR - !!! Exception during processing !!! It is currently not possible to generate videos at a different resolution that the defaults. This should only be the case with 'THUDM/CogVideoX-5b-I2V'.If you think this is incorrect, please open an issue at https://github.com/huggingface/diffusers/issues.
2024-11-20 22:52:16,023 - root - ERROR - Traceback (most recent call last):
  File "F:\aigc\Comfyui-Flux-nf4\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "F:\aigc\Comfyui-Flux-nf4\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "F:\aigc\Comfyui-Flux-nf4\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "F:\aigc\Comfyui-Flux-nf4\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\ComfyUI-CogVideoXWrapper\nodes.py", line 695, in process
    latents = model["pipe"](
  File "F:\aigc\Comfyui-Flux-nf4\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\ComfyUI-CogVideoXWrapper\pipeline_cogvideox.py", line 757, in __call__
    noise_pred = self.transformer(
  File "F:\aigc\Comfyui-Flux-nf4\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "F:\aigc\Comfyui-Flux-nf4\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\ComfyUI-CogVideoXWrapper\custom_cogvideox_transformer_3d.py", line 600, in forward
    hidden_states = self.patch_embed(encoder_hidden_states, hidden_states)
  File "F:\aigc\Comfyui-Flux-nf4\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "F:\aigc\Comfyui-Flux-nf4\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\ComfyUI-CogVideoXWrapper\embeddings.py", line 114, in forward
    raise ValueError(
ValueError: It is currently not possible to generate videos at a different resolution that the defaults. This should only be the case with 'THUDM/CogVideoX-5b-I2V'.If you think this is incorrect, please open an issue at https://github.com/huggingface/diffusers/issues.

2024-11-20 22:52:16,025 - root - INFO - Prompt executed in 42.48 seconds 2024-11-20 22:52:16,216 - asyncio - ERROR - Exception in callback _ProactorBasePipeTransport._call_connection_lost(None) handle: <Handle _ProactorBasePipeTransport._call_connection_lost(None)> Traceback (most recent call last): File "asyncio\events.py", line 84, in _run File "asyncio\proactor_events.py", line 165, in _call_connection_lost ConnectionResetError: [WinError 10054] 远程主机强迫关闭了一个现有的连接。 2024-11-20 22:52:46,263 - asyncio - ERROR - Exception in callback _ProactorBasePipeTransport._call_connection_lost(None) handle: <Handle _ProactorBasePipeTransport._call_connection_lost(None)> Traceback (most recent call last): File "asyncio\events.py", line 84, in _run File "asyncio\proactor_events.py", line 165, in _call_connection_lost ConnectionResetError: [WinError 10054] 远程主机强迫关闭了一个现有的连接。 2024-11-20 22:54:00,567 - root - INFO - got prompt 2024-11-20 22:54:00,684 - ComfyUI-CogVideoXWrapper.pipeline_cogvideox - INFO - Only one image conditioning frame received, img2vid 2024-11-20 22:54:00,684 - ComfyUI-CogVideoXWrapper.pipeline_cogvideox - INFO - Context schedule disabled 2024-11-20 22:54:00,697 - ComfyUI-CogVideoXWrapper.pipeline_cogvideox - INFO - Sampling 49 frames in 13 latent frames at 672x384 with 10 inference steps 2024-11-20 22:54:00,969 - root - ERROR - !!! Exception during processing !!! It is currently not possible to generate videos at a different resolution that the defaults. This should only be the case with 'THUDM/CogVideoX-5b-I2V'.If you think this is incorrect, please open an issue at https://github.com/huggingface/diffusers/issues. 2024-11-20 22:54:00,971 - root - ERROR - Traceback (most recent call last): File "F:\aigc\Comfyui-Flux-nf4\ComfyUI\execution.py", line 323, in execute output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\aigc\Comfyui-Flux-nf4\ComfyUI\execution.py", line 198, in get_output_data return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\aigc\Comfyui-Flux-nf4\ComfyUI\execution.py", line 169, in _map_node_over_list process_inputs(input_dict, i) File "F:\aigc\Comfyui-Flux-nf4\ComfyUI\execution.py", line 158, in process_inputs results.append(getattr(obj, func)(inputs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\ComfyUI-CogVideoXWrapper\nodes.py", line 695, in process latents = model["pipe"]( ^^^^^^^^^^^^^^ File "F:\aigc\Comfyui-Flux-nf4\python_embeded\Lib\site-packages\torch\utils_contextlib.py", line 116, in decorate_context return func(*args, *kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\ComfyUI-CogVideoXWrapper\pipeline_cogvideox.py", line 757, in call noise_pred = self.transformer( ^^^^^^^^^^^^^^^^^ File "F:\aigc\Comfyui-Flux-nf4\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl return self._call_impl(args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\aigc\Comfyui-Flux-nf4\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl return forward_call(*args, kwargs) 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\ComfyUI-CogVideoXWrapper\custom_cogvideox_transformer_3d.py", line 600, in forward hidden_states = self.patch_embed(encoder_hidden_states, hidden_states) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\aigc\Comfyui-Flux-nf4\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, *kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\aigc\Comfyui-Flux-nf4\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl return forward_call(args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\ComfyUI-CogVideoXWrapper\embeddings.py", line 114, in forward raise ValueError( ValueError: It is currently not possible to generate videos at a different resolution that the defaults. This should only be the case with 'THUDM/CogVideoX-5b-I2V'.If you think this is incorrect, please open an issue at https://github.com/huggingface/diffusers/issues.

2024-11-20 22:54:00,972 - root - INFO - Prompt executed in 0.38 seconds 2024-11-20 22:54:16,283 - asyncio - ERROR - Exception in callback _ProactorBasePipeTransport._call_connection_lost(None) handle: <Handle _ProactorBasePipeTransport._call_connection_lost(None)> Traceback (most recent call last): File "asyncio\events.py", line 84, in _run File "asyncio\proactor_events.py", line 165, in _call_connection_lost ConnectionResetError: [WinError 10054] 远程主机强迫关闭了一个现有的连接。 2024-11-20 22:54:20,334 - root - INFO - got prompt 2024-11-20 22:54:20,380 - root - ERROR - !!! Exception during processing !!! Error no file named config.json found in directory F:\aigc\Comfyui-Flux-nf4\ComfyUI\models\CogVideo\CogVideoX-5b-1.5. 2024-11-20 22:54:20,421 - root - ERROR - Traceback (most recent call last): File "F:\aigc\Comfyui-Flux-nf4\ComfyUI\execution.py", line 323, in execute output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\aigc\Comfyui-Flux-nf4\ComfyUI\execution.py", line 198, in get_output_data return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\aigc\Comfyui-Flux-nf4\ComfyUI\execution.py", line 169, in _map_node_over_list process_inputs(input_dict, i) File "F:\aigc\Comfyui-Flux-nf4\ComfyUI\execution.py", line 158, in process_inputs results.append(getattr(obj, func)(inputs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\ComfyUI-CogVideoXWrapper\model_loading.py", line 215, in loadmodel transformer = CogVideoXTransformer3DModel.from_pretrained(base_path, subfolder=subfolder) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\aigc\Comfyui-Flux-nf4\python_embeded\Lib\site-packages\huggingface_hub\utils_validators.py", line 114, in _inner_fn return fn(*args, *kwargs) ^^^^^^^^^^^^^^^^^^^ File "F:\aigc\Comfyui-Flux-nf4\python_embeded\Lib\site-packages\diffusers\models\modeling_utils.py", line 640, in from_pretrained config, unused_kwargs, commit_hash = cls.load_config( ^^^^^^^^^^^^^^^^ File "F:\aigc\Comfyui-Flux-nf4\python_embeded\Lib\site-packages\huggingface_hub\utils_validators.py", line 114, in _inner_fn return fn(args, kwargs) ^^^^^^^^^^^^^^^^^^^ File "F:\aigc\Comfyui-Flux-nf4\python_embeded\Lib\site-packages\diffusers\configuration_utils.py", line 373, in load_config raise EnvironmentError( OSError: Error no file named config.json found in directory F:\aigc\Comfyui-Flux-nf4\ComfyUI\models\CogVideo\CogVideoX-5b-1.5.

2024-11-20 22:54:20,423 - root - INFO - Prompt executed in 0.05 seconds 2024-11-20 22:55:46,307 - asyncio - ERROR - Exception in callback _ProactorBasePipeTransport._call_connection_lost(None) handle: <Handle _ProactorBasePipeTransport._call_connection_lost(None)> Traceback (most recent call last): File "asyncio\events.py", line 84, in _run File "asyncio\proactor_events.py", line 165, in _call_connection_lost ConnectionResetError: [WinError 10054] 远程主机强迫关闭了一个现有的连接。 2024-11-20 22:56:46,332 - asyncio - ERROR - Exception in callback _ProactorBasePipeTransport._call_connection_lost(None) handle: <Handle _ProactorBasePipeTransport._call_connection_lost(None)> Traceback (most recent call last): File "asyncio\events.py", line 84, in _run File "asyncio\proactor_events.py", line 165, in _call_connection_lost ConnectionResetError: [WinError 10054] 远程主机强迫关闭了一个现有的连接。 2024-11-20 22:57:21,305 - root - INFO - got prompt 2024-11-20 22:57:21,309 - root - ERROR - Failed to validate prompt for output 112: 2024-11-20 22:57:21,309 - root - ERROR - * CogVideoDecode 11: 2024-11-20 22:57:21,309 - root - ERROR - - Exception when validating inner node: tuple index out of range 2024-11-20 22:57:21,309 - root - ERROR - Output will be ignored 2024-11-20 22:57:21,332 - root - ERROR - Failed to validate prompt for output 44: 2024-11-20 22:57:21,332 - root - ERROR - Output will be ignored 2024-11-20 22:57:21,421 - root - WARNING - Warning torch.load doesn't support weights_only on this pytorch version, loading unsafely. 2024-11-20 22:57:46,360 - asyncio - ERROR - Exception in callback _ProactorBasePipeTransport._call_connection_lost(None) handle: <Handle _ProactorBasePipeTransport._call_connection_lost(None)> Traceback (most recent call last): File "asyncio\events.py", line 84, in _run File "asyncio\proactor_events.py", line 165, in _call_connection_lost ConnectionResetError: [WinError 10054] 远程主机强迫关闭了一个现有的连接。 2024-11-20 22:57:58,353 - root - ERROR - !!! Exception during processing !!! Unable to infer channel dimension format 2024-11-20 22:57:58,366 - root - ERROR - Traceback (most recent call last): File "F:\aigc\Comfyui-Flux-nf4\ComfyUI\execution.py", line 323, in execute output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\aigc\Comfyui-Flux-nf4\ComfyUI\execution.py", line 198, in get_output_data return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\aigc\Comfyui-Flux-nf4\ComfyUI\execution.py", line 169, in _map_node_over_list process_inputs(input_dict, i) File "F:\aigc\Comfyui-Flux-nf4\ComfyUI\execution.py", line 158, in process_inputs results.append(getattr(obj, func)(**inputs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\BizyAir\llm.py", line 158, in joycaption raise Exception(ret["message"]) Exception: Unable to infer channel dimension format

2024-11-20 22:57:58,367 - root - INFO - Prompt executed in 37.03 seconds 2024-11-20 22:58:03,752 - root - INFO - got prompt 2024-11-20 22:58:03,754 - root - ERROR - Failed to validate prompt for output 112: 2024-11-20 22:58:03,754 - root - ERROR - * CogVideoDecode 11: 2024-11-20 22:58:03,754 - root - ERROR - - Exception when validating inner node: tuple index out of range 2024-11-20 22:58:03,754 - root - ERROR - Output will be ignored 2024-11-20 22:58:03,779 - root - ERROR - Failed to validate prompt for output 44: 2024-11-20 22:58:03,779 - root - ERROR - Output will be ignored 2024-11-20 22:58:35,366 - root - INFO - Prompt executed in 31.59 seconds 2024-11-20 22:59:46,408 - asyncio - ERROR - Exception in callback _ProactorBasePipeTransport._call_connection_lost(None) handle: <Handle _ProactorBasePipeTransport._call_connection_lost(None)> Traceback (most recent call last): File "asyncio\events.py", line 84, in _run File "asyncio\proactor_events.py", line 165, in _call_connection_lost ConnectionResetError: [WinError 10054] 远程主机强迫关闭了一个现有的连接。 2024-11-20 23:00:51,364 - root - INFO - got prompt 2024-11-20 23:01:16,434 - asyncio - ERROR - Exception in callback _ProactorBasePipeTransport._call_connection_lost(None) handle: <Handle _ProactorBasePipeTransport._call_connection_lost(None)> Traceback (most recent call last): File "asyncio\events.py", line 84, in _run File "asyncio\proactor_events.py", line 165, in _call_connection_lost ConnectionResetError: [WinError 10054] 远程主机强迫关闭了一个现有的连接。 2024-11-20 23:01:46,454 - asyncio - ERROR - Exception in callback _ProactorBasePipeTransport._call_connection_lost(None) handle: <Handle _ProactorBasePipeTransport._call_connection_lost(None)> Traceback (most recent call last): File "asyncio\events.py", line 84, in _run File "asyncio\proactor_events.py", line 165, in _call_connection_lost ConnectionResetError: [WinError 10054] 远程主机强迫关闭了一个现有的连接。 2024-11-20 23:02:16,464 - asyncio - ERROR - Exception in callback _ProactorBasePipeTransport._call_connection_lost(None) handle: <Handle _ProactorBasePipeTransport._call_connection_lost(None)> Traceback (most recent call last): File "asyncio\events.py", line 84, in _run File "asyncio\proactor_events.py", line 165, in _call_connection_lost ConnectionResetError: [WinError 10054] 远程主机强迫关闭了一个现有的连接。 2024-11-20 23:02:19,213 - root - ERROR - !!! Exception during processing !!! Error no file named config.json found in directory F:\aigc\Comfyui-Flux-nf4\ComfyUI\models\CogVideo\CogVideoX-5b-1.5. 
2024-11-20 23:02:19,214 - root - ERROR - Traceback (most recent call last): File "F:\aigc\Comfyui-Flux-nf4\ComfyUI\execution.py", line 323, in execute output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\aigc\Comfyui-Flux-nf4\ComfyUI\execution.py", line 198, in get_output_data return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\aigc\Comfyui-Flux-nf4\ComfyUI\execution.py", line 169, in _map_node_over_list process_inputs(input_dict, i) File "F:\aigc\Comfyui-Flux-nf4\ComfyUI\execution.py", line 158, in process_inputs results.append(getattr(obj, func)(inputs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\ComfyUI-CogVideoXWrapper\model_loading.py", line 215, in loadmodel transformer = CogVideoXTransformer3DModel.from_pretrained(base_path, subfolder=subfolder) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\aigc\Comfyui-Flux-nf4\python_embeded\Lib\site-packages\huggingface_hub\utils_validators.py", line 114, in _inner_fn return fn(*args, *kwargs) ^^^^^^^^^^^^^^^^^^^ File "F:\aigc\Comfyui-Flux-nf4\python_embeded\Lib\site-packages\diffusers\models\modeling_utils.py", line 640, in from_pretrained config, unused_kwargs, commit_hash = cls.load_config( ^^^^^^^^^^^^^^^^ File "F:\aigc\Comfyui-Flux-nf4\python_embeded\Lib\site-packages\huggingface_hub\utils_validators.py", line 114, in _inner_fn return fn(args, kwargs) ^^^^^^^^^^^^^^^^^^^ File "F:\aigc\Comfyui-Flux-nf4\python_embeded\Lib\site-packages\diffusers\configuration_utils.py", line 373, in load_config raise EnvironmentError( OSError: Error no file named config.json found in directory F:\aigc\Comfyui-Flux-nf4\ComfyUI\models\CogVideo\CogVideoX-5b-1.5.

2024-11-20 23:02:19,215 - root - INFO - Prompt executed in 87.82 seconds
2024-11-20 23:02:53,873 - root - INFO - got prompt
2024-11-20 23:03:46,486 - asyncio - ERROR - Exception in callback _ProactorBasePipeTransport._call_connection_lost(None) handle: <Handle _ProactorBasePipeTransport._call_connection_lost(None)> Traceback (most recent call last): File "asyncio\events.py", line 84, in _run File "asyncio\proactor_events.py", line 165, in _call_connection_lost ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host.
2024-11-20 23:04:56,136 - ComfyUI-CogVideoXWrapper.utils - INFO - Merging rank 256 LoRA weights from F:\aigc\Comfyui-Flux-nf4\ComfyUI\models\CogVideo\loras\orbit_left_lora_weights.safetensors with strength 0.8
2024-11-20 23:05:13,905 - ComfyUI-CogVideoXWrapper.utils - INFO - Encoded latents shape: torch.Size([1, 2, 16, 60, 90])
2024-11-20 23:05:16,610 - asyncio - ERROR - Exception in callback _ProactorBasePipeTransport._call_connection_lost(None) handle: <Handle _ProactorBasePipeTransport._call_connection_lost(None)> Traceback (most recent call last): File "asyncio\events.py", line 84, in _run File "asyncio\proactor_events.py", line 165, in _call_connection_lost ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host.
2024-11-20 23:05:39,740 - root - INFO - Unloading models for lowram load.
2024-11-20 23:05:41,625 - root - INFO - 1 models unloaded.
2024-11-20 23:05:41,625 - root - INFO - Loading 1 new model
2024-11-20 23:05:41,642 - root - INFO - loaded partially 64.0 56.193359375 0
2024-11-20 23:05:46,618 - asyncio - ERROR - Exception in callback _ProactorBasePipeTransport._call_connection_lost(None) handle: <Handle _ProactorBasePipeTransport._call_connection_lost(None)> Traceback (most recent call last): File "asyncio\events.py", line 84, in _run File "asyncio\proactor_events.py", line 165, in _call_connection_lost ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host.
2024-11-20 23:06:16,206 - root - INFO - Unloading models for lowram load.
2024-11-20 23:06:16,628 - asyncio - ERROR - Exception in callback _ProactorBasePipeTransport._call_connection_lost(None) handle: <Handle _ProactorBasePipeTransport._call_connection_lost(None)> Traceback (most recent call last): File "asyncio\events.py", line 84, in _run File "asyncio\proactor_events.py", line 165, in _call_connection_lost ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host.
2024-11-20 23:06:17,851 - root - INFO - 1 models unloaded.
2024-11-20 23:06:17,851 - root - INFO - Loading 1 new model
2024-11-20 23:06:17,873 - root - INFO - loaded partially 64.0 56.193359375 0
2024-11-20 23:06:50,718 - root - ERROR - !!! Exception during processing !!! Image condition latents required for I2V models
2024-11-20 23:06:50,719 - root - ERROR - Traceback (most recent call last):
  File "F:\aigc\Comfyui-Flux-nf4\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "F:\aigc\Comfyui-Flux-nf4\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "F:\aigc\Comfyui-Flux-nf4\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "F:\aigc\Comfyui-Flux-nf4\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\ComfyUI-CogVideoXWrapper\nodes.py", line 624, in process
    assert not supports_image_conds, "Image condition latents required for I2V models"
AssertionError: Image condition latents required for I2V models

2024-11-20 23:06:50,720 - root - INFO - Prompt executed in 236.82 seconds 2024-11-20 23:07:16,658 - asyncio - ERROR - Exception in callback _ProactorBasePipeTransport._call_connection_lost(None) handle: <Handle _ProactorBasePipeTransport._call_connection_lost(None)> Traceback (most recent call last): File "asyncio\events.py", line 84, in _run File "asyncio\proactor_events.py", line 165, in _call_connection_lost ConnectionResetError: [WinError 10054] 远程主机强迫关闭了一个现有的连接。 2024-11-20 23:07:51,605 - root - INFO - got prompt 2024-11-20 23:08:15,754 - ComfyUI-CogVideoXWrapper.utils - INFO - Encoded latents shape: torch.Size([1, 1, 16, 60, 90]) 2024-11-20 23:08:16,686 - asyncio - ERROR - Exception in callback _ProactorBasePipeTransport._call_connection_lost(None) handle: <Handle _ProactorBasePipeTransport._call_connection_lost(None)> Traceback (most recent call last): File "asyncio\events.py", line 84, in _run File "asyncio\proactor_events.py", line 165, in _call_connection_lost ConnectionResetError: [WinError 10054] 远程主机强迫关闭了一个现有的连接。 2024-11-20 23:08:17,849 - root - INFO - Unloading models for lowram load. 2024-11-20 23:08:19,382 - root - INFO - 1 models unloaded. 2024-11-20 23:08:19,382 - root - INFO - Loading 1 new model 2024-11-20 23:08:19,397 - root - INFO - loaded partially 64.0 56.193359375 0 2024-11-20 23:08:53,692 - root - INFO - Unloading models for lowram load. 2024-11-20 23:08:55,308 - root - INFO - 1 models unloaded. 2024-11-20 23:08:55,308 - root - INFO - Loading 1 new model 2024-11-20 23:08:55,323 - root - INFO - loaded partially 64.0 56.193359375 0 2024-11-20 23:09:16,717 - asyncio - ERROR - Exception in callback _ProactorBasePipeTransport._call_connection_lost(None) handle: <Handle _ProactorBasePipeTransport._call_connection_lost(None)> Traceback (most recent call last): File "asyncio\events.py", line 84, in _run File "asyncio\proactor_events.py", line 165, in _call_connection_lost ConnectionResetError: [WinError 10054] 远程主机强迫关闭了一个现有的连接。 2024-11-20 23:09:28,205 - root - ERROR - !!! Exception during processing !!! Image condition latents required for I2V models 2024-11-20 23:09:28,205 - root - ERROR - Traceback (most recent call last): File "F:\aigc\Comfyui-Flux-nf4\ComfyUI\execution.py", line 323, in execute output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\aigc\Comfyui-Flux-nf4\ComfyUI\execution.py", line 198, in get_output_data return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\aigc\Comfyui-Flux-nf4\ComfyUI\execution.py", line 169, in _map_node_over_list process_inputs(input_dict, i) File "F:\aigc\Comfyui-Flux-nf4\ComfyUI\execution.py", line 158, in process_inputs results.append(getattr(obj, func)(**inputs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\ComfyUI-CogVideoXWrapper\nodes.py", line 624, in process assert not supports_image_conds, "Image condition latents required for I2V models" ^^^^^^^^^^^^^^^^^^^^^^^^ AssertionError: Image condition latents required for I2V models

2024-11-20 23:09:28,207 - root - INFO - Prompt executed in 96.56 seconds 2024-11-20 23:09:46,731 - asyncio - ERROR - Exception in callback _ProactorBasePipeTransport._call_connection_lost(None) handle: <Handle _ProactorBasePipeTransport._call_connection_lost(None)> Traceback (most recent call last): File "asyncio\events.py", line 84, in _run File "asyncio\proactor_events.py", line 165, in _call_connection_lost ConnectionResetError: [WinError 10054] 远程主机强迫关闭了一个现有的连接。 2024-11-20 23:10:16,736 - asyncio - ERROR - Exception in callback _ProactorBasePipeTransport._call_connection_lost(None) handle: <Handle _ProactorBasePipeTransport._call_connection_lost(None)> Traceback (most recent call last): File "asyncio\events.py", line 84, in _run File "asyncio\proactor_events.py", line 165, in _call_connection_lost ConnectionResetError: [WinError 10054] 远程主机强迫关闭了一个现有的连接。 2024-11-20 23:10:46,743 - asyncio - ERROR - Exception in callback _ProactorBasePipeTransport._call_connection_lost(None) handle: <Handle _ProactorBasePipeTransport._call_connection_lost(None)> Traceback (most recent call last): File "asyncio\events.py", line 84, in _run File "asyncio\proactor_events.py", line 165, in _call_connection_lost ConnectionResetError: [WinError 10054] 远程主机强迫关闭了一个现有的连接。 2024-11-20 23:11:57,000 - root - INFO - got prompt 2024-11-20 23:11:59,599 - root - INFO - got prompt 2024-11-20 23:12:13,830 - root - INFO - Unloading models for lowram load. 2024-11-20 23:12:15,400 - root - INFO - 1 models unloaded. 2024-11-20 23:12:15,400 - root - INFO - Loading 1 new model 2024-11-20 23:12:15,415 - root - INFO - loaded partially 64.0 56.193359375 0 2024-11-20 23:12:16,768 - asyncio - ERROR - Exception in callback _ProactorBasePipeTransport._call_connection_lost(None) handle: <Handle _ProactorBasePipeTransport._call_connection_lost(None)> Traceback (most recent call last): File "asyncio\events.py", line 84, in _run File "asyncio\proactor_events.py", line 165, in _call_connection_lost ConnectionResetError: [WinError 10054] 远程主机强迫关闭了一个现有的连接。 2024-11-20 23:12:49,475 - root - INFO - Unloading models for lowram load. 2024-11-20 23:12:51,075 - root - INFO - 1 models unloaded. 2024-11-20 23:12:51,075 - root - INFO - Loading 1 new model 2024-11-20 23:12:51,095 - root - INFO - loaded partially 64.0 56.193359375 0 2024-11-20 23:13:17,873 - asyncio - ERROR - Exception in callback _ProactorBasePipeTransport._call_connection_lost(None) handle: <Handle _ProactorBasePipeTransport._call_connection_lost(None)> Traceback (most recent call last): File "asyncio\events.py", line 84, in _run File "asyncio\proactor_events.py", line 165, in _call_connection_lost ConnectionResetError: [WinError 10054] 远程主机强迫关闭了一个现有的连接。 2024-11-20 23:13:23,788 - root - ERROR - !!! Exception during processing !!! 
Image condition latents required for I2V models 2024-11-20 23:13:23,788 - root - ERROR - Traceback (most recent call last): File "F:\aigc\Comfyui-Flux-nf4\ComfyUI\execution.py", line 323, in execute output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\aigc\Comfyui-Flux-nf4\ComfyUI\execution.py", line 198, in get_output_data return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\aigc\Comfyui-Flux-nf4\ComfyUI\execution.py", line 169, in _map_node_over_list process_inputs(input_dict, i) File "F:\aigc\Comfyui-Flux-nf4\ComfyUI\execution.py", line 158, in process_inputs results.append(getattr(obj, func)(**inputs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\ComfyUI-CogVideoXWrapper\nodes.py", line 624, in process assert not supports_image_conds, "Image condition latents required for I2V models" ^^^^^^^^^^^^^^^^^^^^^^^^ AssertionError: Image condition latents required for I2V models

2024-11-20 23:13:23,789 - root - INFO - Prompt executed in 86.76 seconds 2024-11-20 23:13:24,005 - root - ERROR - !!! Exception during processing !!! Image condition latents required for I2V models 2024-11-20 23:13:24,005 - root - ERROR - Traceback (most recent call last): File "F:\aigc\Comfyui-Flux-nf4\ComfyUI\execution.py", line 323, in execute output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\aigc\Comfyui-Flux-nf4\ComfyUI\execution.py", line 198, in get_output_data return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\aigc\Comfyui-Flux-nf4\ComfyUI\execution.py", line 169, in _map_node_over_list process_inputs(input_dict, i) File "F:\aigc\Comfyui-Flux-nf4\ComfyUI\execution.py", line 158, in process_inputs results.append(getattr(obj, func)(**inputs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\aigc\Comfyui-Flux-nf4\ComfyUI\custom_nodes\ComfyUI-CogVideoXWrapper\nodes.py", line 624, in process assert not supports_image_conds, "Image condition latents required for I2V models" ^^^^^^^^^^^^^^^^^^^^^^^^ AssertionError: Image condition latents required for I2V models

2024-11-20 23:13:24,006 - root - INFO - Prompt executed in 0.02 seconds

Additional Context

(Please add any additional context or steps to reproduce the error here)

kijai commented 1 day ago

What exactly are you trying to do? It is as the error says: for image2video (I2V) models you need to input image_cond_latents; please refer to the example workflows to see how to use it.
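
For anyone landing here from an old workflow, here is a minimal sketch of what the traceback points at, based only on what the log shows: nodes.py line 263 encodes the start image into latents, and nodes.py line 624 asserts that an I2V model actually received them. The names `ImageCondLatents`, `encode_start_image`, and `run_sampler` are hypothetical, not the wrapper's real node definitions.

```python
# Illustrative sketch only (not the wrapper's actual code). It mirrors the two
# spots visible in the traceback: nodes.py line 263 (encoding the start image)
# and nodes.py line 624 (the assertion that fails in this issue).
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class ImageCondLatents:
    samples: Any  # stand-in for the encoded latent tensor

def encode_start_image(vae, start_image, generator=None) -> ImageCondLatents:
    # Roughly what the CogVideo image-encode node does (nodes.py line 263):
    # push the conditioning frame through the VAE encoder and sample latents.
    latents = vae.encode(start_image).latent_dist.sample(generator)
    return ImageCondLatents(samples=latents)

def run_sampler(supports_image_conds: bool,
                image_cond_latents: Optional[ImageCondLatents]) -> None:
    # Roughly the guard at nodes.py line 624: if no image conditioning latents
    # were wired in, an I2V checkpoint refuses to sample.
    if image_cond_latents is None:
        assert not supports_image_conds, "Image condition latents required for I2V models"
    # ... denoising would continue from here ...
```

In workflow terms, that means connecting the latents produced by the image-encode node from your input image to the sampler's image_cond_latents input, which is how the bundled example I2V workflows are wired.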

antique-goo commented 1 day ago

CogVideoX-DXLora_Orbit_img2vid_workflow.json: I ran this workflow and hit the same error. This issue has not been resolved; please fix it.

kijai commented 1 day ago

> CogVideoX-DXLora_Orbit_img2vid_workflow.json: I ran this workflow and hit the same error. This issue has not been resolved; please fix it.

This is not my workflow, and I won't be supporting old ones; it's just too much work.

antique-goo commented 1 day ago

OK, thanks, I understand.

fatpandaria commented 18 hours ago

I am a noob who ran into this issue. I used someone else's workflow and saw that there is no image_cond_latents input connected to the CogVideo sampler, which throws this error. How and where should I add that input? Thanks in advance.

------- update ------- I tried dragging out the input point and found it can take an Empty Latent Image node. After adding it, this error went away, but another one appeared; I will try to figure it out.

kijai commented 17 hours ago

> I am a noob who ran into this issue. I used someone else's workflow and saw that there is no image_cond_latents input connected to the CogVideo sampler, which throws this error. How and where should I add that input? Thanks in advance. ------- update ------- I tried dragging out the input point and found it can take an Empty Latent Image node. After adding it, this error went away, but another one appeared; I will try to figure it out.

Sounds like an old workflow. Please try the included workflows in the examples folder, as there was a big update and lots of things changed.