kijai / ComfyUI-CogVideoXWrapper


Fastercache not working w/CogVid 1.5 #220

Closed gitanon112 closed 1 week ago

gitanon112 commented 1 week ago

I'm getting Torch tensor size mismatch errors when trying to use FasterCache with CogVid 1.5 (using the 1.5_test branch). I've messed around with the settings a bunch with no luck. Is this expected/still in dev right now? Thanks, and let me know if any additional info is needed:

# ComfyUI Error Report
## Error Details
- **Node Type:** CogVideoSampler
- **Exception Type:** RuntimeError
- **Exception Message:** The size of tensor a (16) must match the size of tensor b (8) at non-singleton dimension 1
## Stack Trace

  File "/workspace/ComfyUI/execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/workspace/ComfyUI/execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/workspace/ComfyUI/execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)

  File "/workspace/ComfyUI/execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/workspace/ComfyUI/custom_nodes/ComfyUI-CogVideoXWrapper/nodes.py", line 903, in process
    latents = pipeline["pipe"](
              ^^^^^^^^^^^^^^^^^

  File "/opt/environments/python/comfyui/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^

  File "/workspace/ComfyUI/custom_nodes/ComfyUI-CogVideoXWrapper/pipeline_cogvideox.py", line 884, in __call__
    noise_pred = self.transformer(
                 ^^^^^^^^^^^^^^^^^

  File "/opt/environments/python/comfyui/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/opt/environments/python/comfyui/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/workspace/ComfyUI/custom_nodes/ComfyUI-CogVideoXWrapper/custom_cogvideox_transformer_3d.py", line 651, in forward
    new_hf_uc = self.delta_hf + hf_c
                ~~~~~~~~~~~~~~^~~~~~
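For context on what the error at the bottom of the trace means: the add at `custom_cogvideox_transformer_3d.py:651` combines a cached high-frequency residual (`delta_hf`) with the current step's high-frequency component (`hf_c`), and elementwise addition in PyTorch requires matching non-singleton dimensions. The sketch below is a hypothetical minimal reproduction, not the wrapper's actual code: the names `apply_cached_delta` and the tensor shapes are illustrative assumptions, standing in for a stale cache whose dimension 1 (16) no longer matches the current tensor (8), e.g. after the frame count changed between runs.

```python
import torch

def apply_cached_delta(delta_hf, hf_c):
    # Illustrative guard (hypothetical, not the wrapper's logic):
    # only reuse the cached residual when its shape matches the
    # current tensor; otherwise fall back to the fresh value.
    if delta_hf is not None and delta_hf.shape == hf_c.shape:
        return delta_hf + hf_c
    return hf_c

cached = torch.zeros(1, 16, 64)  # stale cache: 16 along dim 1
fresh = torch.zeros(1, 8, 64)    # current step: 8 along dim 1

try:
    _ = cached + fresh  # reproduces the reported RuntimeError
except RuntimeError as e:
    print(e)  # "The size of tensor a (16) must match the size of tensor b (8) ..."

out = apply_cached_delta(cached, fresh)
print(out.shape)  # torch.Size([1, 8, 64]) -- fell back to the fresh tensor
```

This is only meant to show why the mismatch is raised; the real fix presumably needs the cache to be sized (or invalidated) for the 1.5 model's latent layout.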

## System Information

## Logs

2024-11-12 02:15:43,933 - root - DEBUG - Trying to load custom node /workspace/ComfyUI/custom_nodes/rgthree-comfy
2024-11-12 02:15:43,939 - root - DEBUG - recursive file list on directory /workspace/ComfyUI/models/checkpoints
2024-11-12 02:15:43,939 - root - DEBUG - found 1 files
2024-11-12 02:15:43,947 - root - DEBUG - Trying to load custom node /workspace/ComfyUI/custom_nodes/ComfyUI-MochiEdit
2024-11-12 02:15:43,950 - root - DEBUG - Trying to load custom node /workspace/ComfyUI/custom_nodes/ComfyUI-CogVideoXWrapper
2024-11-12 02:15:44,220 - ComfyUI-CogVideoXWrapper.cogvideox_fun.transformer_3d - INFO - Using sageattn
2024-11-12 02:15:44,221 - ComfyUI-CogVideoXWrapper.cogvideox_fun.fun_pab_transformer_3d - INFO - Using sageattn
2024-11-12 02:15:44,225 - root - INFO - 
Import times for custom nodes:
2024-11-12 02:15:44,225 - root - INFO -    0.0 seconds: /workspace/ComfyUI/custom_nodes/websocket_image_save.py
2024-11-12 02:15:44,225 - root - INFO -    0.0 seconds: /workspace/ComfyUI/custom_nodes/ComfyUI-MochiEdit
2024-11-12 02:15:44,225 - root - INFO -    0.0 seconds: /workspace/ComfyUI/custom_nodes/rgthree-comfy
2024-11-12 02:15:44,225 - root - INFO -    0.0 seconds: /workspace/ComfyUI/custom_nodes/ComfyUI_essentials
2024-11-12 02:15:44,225 - root - INFO -    0.1 seconds: /workspace/ComfyUI/custom_nodes/ComfyUI-KJNodes
2024-11-12 02:15:44,225 - root - INFO -    0.1 seconds: /workspace/ComfyUI/custom_nodes/ComfyUI-Manager
2024-11-12 02:15:44,225 - root - INFO -    0.1 seconds: /workspace/ComfyUI/custom_nodes/ComfyUI-Crystools
2024-11-12 02:15:44,225 - root - INFO -    0.1 seconds: /workspace/ComfyUI/custom_nodes/ComfyUI-MochiWrapper
2024-11-12 02:15:44,225 - root - INFO -    0.3 seconds: /workspace/ComfyUI/custom_nodes/ComfyUI-CogVideoXWrapper
2024-11-12 02:15:44,225 - root - INFO -    0.3 seconds: /workspace/ComfyUI/custom_nodes/ComfyUI-VideoHelperSuite
2024-11-12 02:15:44,225 - root - INFO - 
2024-11-12 02:15:44,236 - root - INFO - Starting server

2024-11-12 02:15:44,236 - root - INFO - To see the GUI go to: http://127.0.0.1:18188
2024-11-12 02:16:30,619 - root - DEBUG - recursive file list on directory /workspace/ComfyUI/models/vae
2024-11-12 02:16:30,619 - root - DEBUG - found 5 files
2024-11-12 02:16:30,620 - root - DEBUG - recursive file list on directory /workspace/ComfyUI/models/vae_approx
2024-11-12 02:16:30,620 - root - DEBUG - found 1 files
2024-11-12 02:16:30,621 - root - DEBUG - recursive file list on directory /workspace/ComfyUI/models/loras
2024-11-12 02:16:30,621 - root - DEBUG - found 1 files
2024-11-12 02:16:30,621 - root - DEBUG - recursive file list on directory /workspace/ComfyUI/models/text_encoders
2024-11-12 02:16:30,621 - root - DEBUG - found 1 files
2024-11-12 02:16:30,621 - root - DEBUG - recursive file list on directory /workspace/ComfyUI/models/clip
2024-11-12 02:16:30,622 - root - DEBUG - found 5 files
2024-11-12 02:16:30,622 - root - DEBUG - recursive file list on directory /workspace/ComfyUI/models/unet
2024-11-12 02:16:30,622 - root - DEBUG - found 1 files
2024-11-12 02:16:30,622 - root - DEBUG - recursive file list on directory /workspace/ComfyUI/models/diffusion_models
2024-11-12 02:16:30,623 - root - DEBUG - found 5 files
2024-11-12 02:16:30,623 - root - DEBUG - recursive file list on directory /workspace/ComfyUI/models/controlnet
2024-11-12 02:16:30,623 - root - DEBUG - found 1 files
2024-11-12 02:16:30,623 - root - DEBUG - recursive file list on directory /workspace/ComfyUI/models/style_models
2024-11-12 02:16:30,624 - root - DEBUG - found 1 files
2024-11-12 02:16:30,624 - root - DEBUG - recursive file list on directory /workspace/ComfyUI/models/clip_vision
2024-11-12 02:16:30,624 - root - DEBUG - found 1 files
2024-11-12 02:16:30,624 - root - DEBUG - recursive file list on directory /workspace/ComfyUI/models/gligen
2024-11-12 02:16:30,624 - root - DEBUG - found 1 files
2024-11-12 02:16:30,624 - root - DEBUG - recursive file list on directory /workspace/ComfyUI/models/configs
2024-11-12 02:16:30,625 - root - DEBUG - found 11 files
2024-11-12 02:16:30,625 - root - DEBUG - recursive file list on directory /workspace/ComfyUI/models/hypernetworks
2024-11-12 02:16:30,625 - root - DEBUG - found 1 files
2024-11-12 02:16:30,625 - root - DEBUG - recursive file list on directory /workspace/ComfyUI/models/upscale_models
2024-11-12 02:16:30,626 - root - DEBUG - found 1 files
2024-11-12 02:16:30,628 - root - DEBUG - recursive file list on directory /workspace/ComfyUI/models/photomaker
2024-11-12 02:16:30,628 - root - DEBUG - found 1 files
2024-11-12 02:16:30,630 - root - DEBUG - recursive file list on directory /workspace/ComfyUI/custom_nodes/ComfyUI-KJNodes/fonts
2024-11-12 02:16:30,630 - root - DEBUG - found 3 files
2024-11-12 02:16:30,633 - root - DEBUG - recursive file list on directory /workspace/ComfyUI/custom_nodes/ComfyUI-KJNodes/intrinsic_loras
2024-11-12 02:16:30,633 - root - DEBUG - found 5 files
2024-11-12 02:16:30,634 - root - DEBUG - recursive file list on directory /workspace/ComfyUI/custom_nodes/ComfyUI-VideoHelperSuite/videohelpersuite/../video_formats
2024-11-12 02:16:30,635 - root - DEBUG - found 11 files
2024-11-12 02:16:30,640 - root - DEBUG - recursive file list on directory /workspace/ComfyUI/custom_nodes/ComfyUI_essentials/luts
2024-11-12 02:16:30,640 - root - DEBUG - found 1 files
2024-11-12 02:16:30,642 - root - DEBUG - recursive file list on directory /workspace/ComfyUI/models/embeddings
2024-11-12 02:16:30,642 - root - DEBUG - found 1 files
2024-11-12 02:29:34,963 - root - INFO - got prompt
2024-11-12 02:29:34,993 - root - ERROR - Failed to validate prompt for output 33:
2024-11-12 02:29:34,993 - root - ERROR - * CogVideoSampler 34:
2024-11-12 02:29:34,993 - root - ERROR -   - Required input is missing: pipeline
2024-11-12 02:29:34,993 - root - ERROR - Output will be ignored
2024-11-12 02:29:34,993 - root - WARNING - invalid prompt: {'type': 'prompt_outputs_failed_validation', 'message': 'Prompt outputs failed validation', 'details': '', 'extra_info': {}}
2024-11-12 02:30:18,619 - root - INFO - got prompt
2024-11-12 02:30:18,683 - root - DEBUG - Created SD3 text encoder with: clip_l False, clip_g False, t5xxl True:torch.float16
2024-11-12 02:30:18,916 - root - DEBUG - Model doesn't have a device attribute.
2024-11-12 02:30:18,919 - root - DEBUG - CLIP model load device: cuda:0, offload device: cpu, current: cpu
2024-11-12 02:30:19,091 - root - INFO - got prompt
2024-11-12 02:30:20,794 - root - DEBUG - clip unexpected: ['encoder.embed_tokens.weight']
2024-11-12 02:30:22,542 - root - INFO - Requested to load SD3ClipModel_
2024-11-12 02:30:22,542 - root - INFO - Loading 1 new model
2024-11-12 02:30:22,547 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.shared Embedding(32128, 4096)
2024-11-12 02:30:22,547 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.9.layer.1.DenseReluDense.wo Linear(in_features=10240, out_features=4096, bias=False)
2024-11-12 02:30:22,547 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.9.layer.1.DenseReluDense.wi_1 Linear(in_features=4096, out_features=10240, bias=False)
2024-11-12 02:30:22,547 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.9.layer.1.DenseReluDense.wi_0 Linear(in_features=4096, out_features=10240, bias=False)
2024-11-12 02:30:22,550 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.9.layer.0.SelfAttention.v Linear(in_features=4096, out_features=4096, bias=False)
2024-11-12 02:30:22,550 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.9.layer.0.SelfAttention.q Linear(in_features=4096, out_features=4096, bias=False)
2024-11-12 02:30:22,550 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.9.layer.0.SelfAttention.o Linear(in_features=4096, out_features=4096, bias=False)
2024-11-12 02:30:22,550 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.9.layer.0.SelfAttention.k Linear(in_features=4096, out_features=4096, bias=False)
[... identical "lowvram: loaded module regularly" DEBUG lines repeated for the remaining t5xxl encoder blocks (DenseReluDense wi_0/wi_1/wo and SelfAttention q/k/v/o for blocks 0-23); log truncated here ...]
2024-11-12 02:30:22,552 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.17.layer.0.SelfAttention.v Linear(in_features=4096, out_features=4096, bias=False)
2024-11-12 02:30:22,552 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.17.layer.0.SelfAttention.q Linear(in_features=4096, out_features=4096, bias=False)
2024-11-12 02:30:22,552 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.17.layer.0.SelfAttention.o Linear(in_features=4096, out_features=4096, bias=False)
2024-11-12 02:30:22,552 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.17.layer.0.SelfAttention.k Linear(in_features=4096, out_features=4096, bias=False)
2024-11-12 02:30:22,552 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.16.layer.0.SelfAttention.v Linear(in_features=4096, out_features=4096, bias=False)
2024-11-12 02:30:22,552 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.16.layer.0.SelfAttention.q Linear(in_features=4096, out_features=4096, bias=False)
2024-11-12 02:30:22,552 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.16.layer.0.SelfAttention.o Linear(in_features=4096, out_features=4096, bias=False)
2024-11-12 02:30:22,553 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.16.layer.0.SelfAttention.k Linear(in_features=4096, out_features=4096, bias=False)
2024-11-12 02:30:22,553 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.15.layer.0.SelfAttention.v Linear(in_features=4096, out_features=4096, bias=False)
2024-11-12 02:30:22,553 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.15.layer.0.SelfAttention.q Linear(in_features=4096, out_features=4096, bias=False)
2024-11-12 02:30:22,553 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.15.layer.0.SelfAttention.o Linear(in_features=4096, out_features=4096, bias=False)
2024-11-12 02:30:22,553 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.15.layer.0.SelfAttention.k Linear(in_features=4096, out_features=4096, bias=False)
2024-11-12 02:30:22,553 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.14.layer.0.SelfAttention.v Linear(in_features=4096, out_features=4096, bias=False)
2024-11-12 02:30:22,553 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.14.layer.0.SelfAttention.q Linear(in_features=4096, out_features=4096, bias=False)
2024-11-12 02:30:22,553 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.14.layer.0.SelfAttention.o Linear(in_features=4096, out_features=4096, bias=False)
2024-11-12 02:30:22,553 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.14.layer.0.SelfAttention.k Linear(in_features=4096, out_features=4096, bias=False)
2024-11-12 02:30:22,553 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.13.layer.0.SelfAttention.v Linear(in_features=4096, out_features=4096, bias=False)
2024-11-12 02:30:22,553 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.13.layer.0.SelfAttention.q Linear(in_features=4096, out_features=4096, bias=False)
2024-11-12 02:30:22,553 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.13.layer.0.SelfAttention.o Linear(in_features=4096, out_features=4096, bias=False)
2024-11-12 02:30:22,553 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.13.layer.0.SelfAttention.k Linear(in_features=4096, out_features=4096, bias=False)
2024-11-12 02:30:22,553 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.12.layer.0.SelfAttention.v Linear(in_features=4096, out_features=4096, bias=False)
2024-11-12 02:30:22,553 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.12.layer.0.SelfAttention.q Linear(in_features=4096, out_features=4096, bias=False)
2024-11-12 02:30:22,553 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.12.layer.0.SelfAttention.o Linear(in_features=4096, out_features=4096, bias=False)
2024-11-12 02:30:22,553 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.12.layer.0.SelfAttention.k Linear(in_features=4096, out_features=4096, bias=False)
2024-11-12 02:30:22,553 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.11.layer.0.SelfAttention.v Linear(in_features=4096, out_features=4096, bias=False)
2024-11-12 02:30:22,553 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.11.layer.0.SelfAttention.q Linear(in_features=4096, out_features=4096, bias=False)
2024-11-12 02:30:22,553 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.11.layer.0.SelfAttention.o Linear(in_features=4096, out_features=4096, bias=False)
2024-11-12 02:30:22,553 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.11.layer.0.SelfAttention.k Linear(in_features=4096, out_features=4096, bias=False)
2024-11-12 02:30:22,554 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.10.layer.0.SelfAttention.v Linear(in_features=4096, out_features=4096, bias=False)
2024-11-12 02:30:22,554 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.10.layer.0.SelfAttention.q Linear(in_features=4096, out_features=4096, bias=False)
2024-11-12 02:30:22,554 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.10.layer.0.SelfAttention.o Linear(in_features=4096, out_features=4096, bias=False)
2024-11-12 02:30:22,554 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.10.layer.0.SelfAttention.k Linear(in_features=4096, out_features=4096, bias=False)
2024-11-12 02:30:22,554 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.1.layer.0.SelfAttention.v Linear(in_features=4096, out_features=4096, bias=False)
2024-11-12 02:30:22,554 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.1.layer.0.SelfAttention.q Linear(in_features=4096, out_features=4096, bias=False)
2024-11-12 02:30:22,554 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.1.layer.0.SelfAttention.o Linear(in_features=4096, out_features=4096, bias=False)
2024-11-12 02:30:22,554 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.1.layer.0.SelfAttention.k Linear(in_features=4096, out_features=4096, bias=False)
2024-11-12 02:30:22,554 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.0.layer.0.SelfAttention.v Linear(in_features=4096, out_features=4096, bias=False)
2024-11-12 02:30:22,554 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.0.layer.0.SelfAttention.q Linear(in_features=4096, out_features=4096, bias=False)
2024-11-12 02:30:22,554 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.0.layer.0.SelfAttention.o Linear(in_features=4096, out_features=4096, bias=False)
2024-11-12 02:30:22,554 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.0.layer.0.SelfAttention.k Linear(in_features=4096, out_features=4096, bias=False)
2024-11-12 02:30:22,554 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.final_layer_norm T5LayerNorm()
2024-11-12 02:30:22,554 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.9.layer.1.layer_norm T5LayerNorm()
2024-11-12 02:30:22,554 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.9.layer.0.layer_norm T5LayerNorm()
2024-11-12 02:30:22,554 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.8.layer.1.layer_norm T5LayerNorm()
2024-11-12 02:30:22,554 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.8.layer.0.layer_norm T5LayerNorm()
2024-11-12 02:30:22,554 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.7.layer.1.layer_norm T5LayerNorm()
2024-11-12 02:30:22,554 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.7.layer.0.layer_norm T5LayerNorm()
2024-11-12 02:30:22,554 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.6.layer.1.layer_norm T5LayerNorm()
2024-11-12 02:30:22,555 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.6.layer.0.layer_norm T5LayerNorm()
2024-11-12 02:30:22,555 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.5.layer.1.layer_norm T5LayerNorm()
2024-11-12 02:30:22,555 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.5.layer.0.layer_norm T5LayerNorm()
2024-11-12 02:30:22,555 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.4.layer.1.layer_norm T5LayerNorm()
2024-11-12 02:30:22,555 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.4.layer.0.layer_norm T5LayerNorm()
2024-11-12 02:30:22,555 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.3.layer.1.layer_norm T5LayerNorm()
2024-11-12 02:30:22,555 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.3.layer.0.layer_norm T5LayerNorm()
2024-11-12 02:30:22,555 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.23.layer.1.layer_norm T5LayerNorm()
2024-11-12 02:30:22,555 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.23.layer.0.layer_norm T5LayerNorm()
2024-11-12 02:30:22,555 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.22.layer.1.layer_norm T5LayerNorm()
2024-11-12 02:30:22,555 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.22.layer.0.layer_norm T5LayerNorm()
2024-11-12 02:30:22,555 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.21.layer.1.layer_norm T5LayerNorm()
2024-11-12 02:30:22,555 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.21.layer.0.layer_norm T5LayerNorm()
2024-11-12 02:30:22,555 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.20.layer.1.layer_norm T5LayerNorm()
2024-11-12 02:30:22,555 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.20.layer.0.layer_norm T5LayerNorm()
2024-11-12 02:30:22,555 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.2.layer.1.layer_norm T5LayerNorm()
2024-11-12 02:30:22,555 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.2.layer.0.layer_norm T5LayerNorm()
2024-11-12 02:30:22,555 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.19.layer.1.layer_norm T5LayerNorm()
2024-11-12 02:30:22,555 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.19.layer.0.layer_norm T5LayerNorm()
2024-11-12 02:30:22,555 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.18.layer.1.layer_norm T5LayerNorm()
2024-11-12 02:30:22,555 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.18.layer.0.layer_norm T5LayerNorm()
2024-11-12 02:30:22,555 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.17.layer.1.layer_norm T5LayerNorm()
2024-11-12 02:30:22,555 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.17.layer.0.layer_norm T5LayerNorm()
2024-11-12 02:30:22,555 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.16.layer.1.layer_norm T5LayerNorm()
2024-11-12 02:30:22,556 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.16.layer.0.layer_norm T5LayerNorm()
2024-11-12 02:30:22,556 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.15.layer.1.layer_norm T5LayerNorm()
2024-11-12 02:30:22,556 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.15.layer.0.layer_norm T5LayerNorm()
2024-11-12 02:30:22,556 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.14.layer.1.layer_norm T5LayerNorm()
2024-11-12 02:30:22,556 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.14.layer.0.layer_norm T5LayerNorm()
2024-11-12 02:30:22,556 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.13.layer.1.layer_norm T5LayerNorm()
2024-11-12 02:30:22,556 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.13.layer.0.layer_norm T5LayerNorm()
2024-11-12 02:30:22,556 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.12.layer.1.layer_norm T5LayerNorm()
2024-11-12 02:30:22,556 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.12.layer.0.layer_norm T5LayerNorm()
2024-11-12 02:30:22,556 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.11.layer.1.layer_norm T5LayerNorm()
2024-11-12 02:30:22,556 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.11.layer.0.layer_norm T5LayerNorm()
2024-11-12 02:30:22,556 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.10.layer.1.layer_norm T5LayerNorm()
2024-11-12 02:30:22,556 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.10.layer.0.layer_norm T5LayerNorm()
2024-11-12 02:30:22,556 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.1.layer.1.layer_norm T5LayerNorm()
2024-11-12 02:30:22,556 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.1.layer.0.layer_norm T5LayerNorm()
2024-11-12 02:30:22,556 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.0.layer.1.layer_norm T5LayerNorm()
2024-11-12 02:30:22,556 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.0.layer.0.layer_norm T5LayerNorm()
2024-11-12 02:30:22,556 - root - DEBUG - lowvram: loaded module regularly t5xxl.transformer.encoder.block.0.layer.0.SelfAttention.relative_attention_bias Embedding(32, 64)
2024-11-12 02:30:22,559 - root - INFO - loaded completely 0.0 9083.38671875 True
2024-11-12 02:30:33,320 - ComfyUI-CogVideoXWrapper.pipeline_cogvideox - INFO - Temporal tiling and context schedule disabled
2024-11-12 02:30:33,324 - ComfyUI-CogVideoXWrapper.pipeline_cogvideox - INFO - Sampling 53 frames in 13 latent frames at 1360x768 with 50 inference steps
2024-11-12 02:31:57,592 - root - ERROR - !!! Exception during processing !!! The size of tensor a (16) must match the size of tensor b (8) at non-singleton dimension 1
2024-11-12 02:31:57,598 - root - ERROR - Traceback (most recent call last):
  File "/workspace/ComfyUI/execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/ComfyUI/execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/ComfyUI/execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "/workspace/ComfyUI/execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/ComfyUI/custom_nodes/ComfyUI-CogVideoXWrapper/nodes.py", line 903, in process
    latents = pipeline["pipe"](
              ^^^^^^^^^^^^^^^^^
  File "/opt/environments/python/comfyui/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/ComfyUI/custom_nodes/ComfyUI-CogVideoXWrapper/pipeline_cogvideox.py", line 884, in __call__
    noise_pred = self.transformer(
                 ^^^^^^^^^^^^^^^^^
  File "/opt/environments/python/comfyui/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/environments/python/comfyui/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/ComfyUI/custom_nodes/ComfyUI-CogVideoXWrapper/custom_cogvideox_transformer_3d.py", line 651, in forward
    new_hf_uc = self.delta_hf + hf_c
                ~~~~~~~~~~~~~~^~~~~~
RuntimeError: The size of tensor a (16) must match the size of tensor b (8) at non-singleton dimension 1
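
The failing line adds a cached high-frequency delta (`self.delta_hf`) from an earlier FasterCache step to the current high-frequency component (`hf_c`), so the broadcast fails when the two were produced at different latent sizes (here 16 vs. 8 in dimension 1, consistent with the 1.5 model changing the latent frame count). The sketch below reproduces the mismatch in isolation; the shapes and the `apply_delta` guard are hypothetical illustrations, not the wrapper's actual code or fix:

```python
import torch

def apply_delta(delta_hf, hf_c):
    # Hypothetical guard: if the cached delta was recorded at a different
    # shape (e.g. a different latent frame count in dim 1), treat the cache
    # as stale and skip it instead of raising a broadcast error.
    if delta_hf is None or delta_hf.shape != hf_c.shape:
        return hf_c
    return delta_hf + hf_c

cached = torch.zeros(2, 16, 64)   # delta cached when dim 1 was 16
current = torch.zeros(2, 8, 64)   # current step has 8 in dim 1

try:
    _ = cached + current          # the addition the wrapper performed
except RuntimeError as e:
    print(e)                      # same message as in the traceback above

out = apply_delta(cached, current)
print(out.shape)                  # torch.Size([2, 8, 64]); stale cache skipped
```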

2024-11-12 02:31:57,601 - root - INFO - Prompt executed in 98.96 seconds

## Attached Workflow

```json
{"last_node_id":44,"last_link_id":22,"nodes":[{"id":33,"type":"VHS_VideoCombine","pos":{"0":1977,"1":112},"size":[775.9083862304688,310],"flags":{},"order":12,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":15},{"name":"audio","type":"AUDIO","link":null,"shape":7},{"name":"meta_batch","type":"VHS_BatchManager","link":null,"shape":7},{"name":"vae","type":"VAE","link":null,"shape":7}],"outputs":[{"name":"Filenames","type":"VHS_FILENAMES","links":null}],"properties":{"Node name for S&R":"VHS_VideoCombine"},"widgets_values":{"frame_rate":16,"loop_count":0,"filename_prefix":"CogVideoX5B","format":"video/h264-mp4","pix_fmt":"yuv420p10le","crf":19,"save_metadata":true,"pingpong":false,"save_output":true,"videopreview":{"hidden":false,"paused":false,"params":{"filename":"CogVideoX5B_00006.mp4","subfolder":"","type":"output","format":"video/h264-mp4","frame_rate":16},"muted":false}}},{"id":36,"type":"CogVideoPABConfig","pos":{"0":100,"1":342},"size":{"0":315,"1":346},"flags":{},"order":0,"mode":4,"inputs":[],"outputs":[{"name":"pab_config","type":"PAB_CONFIG","links":[]}],"properties":{"Node name for S&R":"CogVideoPABConfig"},"widgets_values":[true,850,100,2,true,850,100,4,true,850,100,6,50]},{"id":38,"type":"CogVideoXTorchCompileSettings","pos":{"0":351,"1":727},"size":{"0":365.4000244140625,"1":154},"flags":{},"order":1,"mode":4,"inputs":[],"outputs":[{"name":"torch_compile_args","type":"COMPILEARGS","links":[],"slot_index":0}],"properties":{"Node name for S&R":"CogVideoXTorchCompileSettings"},"widgets_values":["inductor",false,"max-autotune",false,1024]},{"id":11,"type":"CogVideoDecode","pos":{"0":1618,"1":25},"size":{"0":315,"1":218},"flags":{},"order":11,"mode":0,"inputs":[{"name":"pipeline","type":"COGVIDEOPIPE","link":11},{"name":"samples","type":"LATENT","link":12},{"name":"vae_override","type":"VAE","link":null,"shape":7}],"outputs":[{"name":"images","type":"IMAGE","links":[15]}],"properties":{"Node name for S&R":"CogVideoDecode"},"widgets_values":[false,384,680,0.3,0.3,false]},{"id":30,"type":"CogVideoTextEncode","pos":{"0":515,"1":130},"size":{"0":400,"1":200},"flags":{},"order":8,"mode":0,"inputs":[{"name":"clip","type":"CLIP","link":13}],"outputs":[{"name":"conditioning","type":"CONDITIONING","links":[21]},{"name":"clip","type":"CLIP","links":null}],"properties":{"Node name for S&R":"CogVideoTextEncode"},"widgets_values":["In the haunting backdrop of a warIn the haunting backdrop of a war-torn city, where ruins and crumbled walls tell a story of devastation, a poignant close-up frames a young girl. Her face is smudged with ash, a silent testament to the chaos around her. Her eyes glistening with a mix of sorrow and resilience, capturing the raw emotion of a world that has lost its innocence to the ravages of conflict.\n",0.7000000000000001,false]},{"id":31,"type":"CogVideoTextEncode","pos":{"0":503,"1":397},"size":{"0":400,"1":200},"flags":{},"order":9,"mode":0,"inputs":[{"name":"clip","type":"CLIP","link":14}],"outputs":[{"name":"conditioning","type":"CONDITIONING","links":[18]},{"name":"clip","type":"CLIP","links":null}],"properties":{"Node name for S&R":"CogVideoTextEncode"},"widgets_values":["unrealistic, saturated, high contrast, big nose, painting, drawing, sketch, cartoon, anime, manga, render, CG, 3d, watermark, signature, label, (deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime, mutated hands and fingers:1.4), (deformed, distorted, disfigured:1.3), poorly drawn, bad anatomy, wrong anatomy, extra limb, missing limb, floating limbs, disconnected limbs, mutation, mutated, ugly, disgusting, amputation",1,false]},{"id":42,"type":"MochiFasterCache","pos":{"0":1202.302001953125,"1":-312.0066833496094},"size":{"0":315,"1":130},"flags":{},"order":2,"mode":0,"inputs":[],"outputs":[{"name":"fastercache","type":"FASTERCACHEARGS","links":null}],"properties":{"Node name for S&R":"MochiFasterCache"},"widgets_values":[10,22,28,"main_device"]},{"id":39,"type":"CogVideoXFasterCache","pos":{"0":722,"1":-211},"size":{"0":315,"1":130},"flags":{},"order":3,"mode":0,"inputs":[],"outputs":[{"name":"fastercache","type":"FASTERCACHEARGS","links":[20],"slot_index":0}],"properties":{"Node name for S&R":"CogVideoXFasterCache"},"widgets_values":[15,30,40,"main_device"]},{"id":20,"type":"CLIPLoader","pos":{"0":100,"1":130},"size":{"0":315,"1":82},"flags":{},"order":4,"mode":0,"inputs":[],"outputs":[{"name":"CLIP","type":"CLIP","links":[13,14]}],"properties":{"Node name for S&R":"CLIPLoader"},"widgets_values":["t5-sd3.5/text_encoders/t5xxl_fp16.safetensors","sd3"]},{"id":43,"type":"DownloadAndLoadCogVideoGGUFModel","pos":{"0":492,"1":734},"size":{"0":378,"1":222},"flags":{},"order":5,"mode":0,"inputs":[{"name":"pab_config","type":"PAB_CONFIG","link":null,"shape":7},{"name":"block_edit","type":"TRANSFORMERBLOCKS","link":null,"shape":7}],"outputs":[{"name":"cogvideo_pipe","type":"COGVIDEOPIPE","links":null}],"properties":{"Node name for S&R":"DownloadAndLoadCogVideoGGUFModel"},"widgets_values":["CogVideoX_5b_GGUF_Q4_0.safetensors","bf16",false,"main_device",false,"disabled","sdpa"]},{"id":34,"type":"CogVideoSampler","pos":{"0":1195,"1":-6},"size":{"0":405.5999755859375,"1":410},"flags":{},"order":10,"mode":0,"inputs":[{"name":"pipeline","type":"COGVIDEOPIPE","link":22},{"name":"positive","type":"CONDITIONING","link":21},{"name":"negative","type":"CONDITIONING","link":18},{"name":"samples","type":"LATENT","link":null,"shape":7},{"name":"image_cond_latents","type":"LATENT","link":null,"shape":7},{"name":"context_options","type":"COGCONTEXT","link":null,"shape":7},{"name":"controlnet","type":"COGVIDECONTROLNET","link":null,"shape":7},{"name":"tora_trajectory","type":"TORAFEATURES","link":null,"shape":7},{"name":"fastercache","type":"FASTERCACHEARGS","link":20,"shape":7}],"outputs":[{"name":"cogvideo_pipe","type":"COGVIDEOPIPE","links":[11]},{"name":"samples","type":"LATENT","links":[12]}],"properties":{"Node name for S&R":"CogVideoSampler"},"widgets_values":[768,1360,49,50,6,596820750311227,"randomize","DPM++",1]},{"id":1,"type":"DownloadAndLoadCogVideoModel","pos":{"0":923,"1":654},"size":{"0":330,"1":262},"flags":{},"order":6,"mode":0,"inputs":[{"name":"pab_config","type":"PAB_CONFIG","link":null,"shape":7},{"name":"block_edit","type":"TRANSFORMERBLOCKS","link":null,"shape":7},{"name":"lora","type":"COGLORA","link":null,"shape":7},{"name":"compile_args","type":"COMPILEARGS","link":null,"shape":7}],"outputs":[{"name":"cogvideo_pipe","type":"COGVIDEOPIPE","links":[22],"slot_index":0}],"properties":{"Node name for S&R":"DownloadAndLoadCogVideoModel"},"widgets_values":["kijai/CogVideoX-5b-1.5-T2V","bf16","disabled","disabled",false,"sdpa","main_device"]},{"id":44,"type":"CogVideoXFasterCache","pos":{"0":1603.6949462890625,"1":482.98773193359375},"size":{"0":315,"1":130},"flags":{},"order":7,"mode":0,"inputs":[],"outputs":[{"name":"fastercache","type":"FASTERCACHEARGS","links":null}],"properties":{"Node name for S&R":"CogVideoXFasterCache"},"widgets_values":[15,30,40,"main_device"]}],"links":[[11,34,0,11,0,"COGVIDEOPIPE"],[12,34,1,11,1,"LATENT"],[13,20,0,30,0,"CLIP"],[14,20,0,31,0,"CLIP"],[15,11,0,33,0,"IMAGE"],[18,31,0,34,2,"CONDITIONING"],[20,39,0,34,8,"FASTERCACHEARGS"],[21,30,0,34,1,"CONDITIONING"],[22,1,0,34,0,"COGVIDEOPIPE"]],"groups":[],"config":{},"extra":{"ds":{"scale":0.683013455365071,"offset":[-46.363774621319045,193.84279323205348]}},"version":0.4}
```

kijai commented 1 week ago

Fixed.

gitanon112 commented 1 week ago

Thanks, you're incredible!