kijai / ComfyUI-VEnhancer


error report #2

Open T8mars opened 3 months ago

T8mars commented 3 months ago

(screenshot of the error attached)

leetraman822 commented 3 months ago

same...😔

kijai commented 3 months ago

Didn't face that myself, maybe something to do with the open-clip-torch version? Did you install requirements.txt?

leetraman822 commented 3 months ago

> Didn't face that myself, maybe something to do with the open-clip-torch version? Did you install requirements.txt?

Yes, I have installed everything in requirements.txt, and there were no errors. I checked that the installed open-clip-torch version is 2.26.1. Here's my ComfyUI log from starting the workflow to the error:

```
Loading model from: F:\StableDiffusion\ComfyUI\models\venhancer\venhancer_paper-fp16.safetensors
leftover_keys
Loaded ViT-H-14 model config.
Loading pretrained ViT-H-14 weights (laion2b_s32b_b79k).
Build encoder with FrozenOpenCLIPEmbedder
Build diffusion with GaussianDiffusion
!!! Exception during processing !!! The shape of the 2D attn_mask is torch.Size([77, 77]), but should be (1, 1).
Traceback (most recent call last):
  File "F:\StableDiffusion\ComfyUI\execution.py", line 316, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "F:\StableDiffusion\ComfyUI\execution.py", line 191, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "F:\StableDiffusion\ComfyUI\execution.py", line 164, in _map_node_over_list
    process_inputs({})
  File "F:\StableDiffusion\ComfyUI\execution.py", line 157, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "F:\StableDiffusion\ComfyUI\custom_nodes\ComfyUI-VEnhancer\nodes.py", line 79, in loadmodel
    self.model = VideoToVideo(generator)
  File "F:\StableDiffusion\ComfyUI\custom_nodes\ComfyUI-VEnhancer\video_to_video\video_to_video_model.py", line 64, in __init__
    negative_y = clip_encoder(self.negative_prompt).detach()
  File "C:\Users\Administrator\anaconda3\envs\comfyui\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\Administrator\anaconda3\envs\comfyui\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "F:\StableDiffusion\ComfyUI\custom_nodes\ComfyUI-VEnhancer\video_to_video\modules\embedder.py", line 51, in forward
    z = self.encode_with_transformer(tokens.to(self.device))
  File "F:\StableDiffusion\ComfyUI\custom_nodes\ComfyUI-VEnhancer\video_to_video\modules\embedder.py", line 58, in encode_with_transformer
    x = self.text_transformer_forward(x, attn_mask=self.model.attn_mask)
  File "F:\StableDiffusion\ComfyUI\custom_nodes\ComfyUI-VEnhancer\video_to_video\modules\embedder.py", line 71, in text_transformer_forward
    x = r(x, attn_mask=attn_mask)
  File "C:\Users\Administrator\anaconda3\envs\comfyui\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\Administrator\anaconda3\envs\comfyui\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\Administrator\anaconda3\envs\comfyui\Lib\site-packages\open_clip\transformer.py", line 263, in forward
    x = q_x + self.ls_1(self.attention(q_x=self.ln_1(q_x), k_x=k_x, v_x=v_x, attn_mask=attn_mask))
  File "C:\Users\Administrator\anaconda3\envs\comfyui\Lib\site-packages\open_clip\transformer.py", line 250, in attention
    return self.attn(
  File "C:\Users\Administrator\anaconda3\envs\comfyui\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\Administrator\anaconda3\envs\comfyui\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\Administrator\anaconda3\envs\comfyui\Lib\site-packages\torch\nn\modules\activation.py", line 1275, in forward
    attn_output, attn_output_weights = F.multi_head_attention_forward(
  File "C:\Users\Administrator\anaconda3\envs\comfyui\Lib\site-packages\torch\nn\functional.py", line 5438, in multi_head_attention_forward
    raise RuntimeError(f"The shape of the 2D attn_mask is {attn_mask.shape}, but should be {correct_2d_size}.")
RuntimeError: The shape of the 2D attn_mask is torch.Size([77, 77]), but should be (1, 1).
```

kijai commented 3 months ago

Then it might be one of the packages that doesn't have a version pinned in requirements.txt, like torch or transformers.
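To compare environments, a quick way to dump the versions of the suspect packages (a generic sketch using only the standard library; the package list is just the ones mentioned in this thread):

```python
from importlib.metadata import version, PackageNotFoundError

# Print installed versions of the packages suspected in this thread,
# so they can be compared against a known-working setup.
for pkg in ("open-clip-torch", "torch", "transformers"):
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "not installed")
```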

kunamin commented 3 months ago

I solved the issue by running pip install "open-clip-torch==2.24.0".

al3dv2 commented 2 months ago

How do you install open-clip-torch==2.24.0 on the portable version of ComfyUI? Please help.
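The portable build ships its own embedded Python, so the package has to be installed with that interpreter rather than any system Python. A sketch, assuming the default `ComfyUI_windows_portable` folder layout (adjust the path if yours differs):

```shell
cd ComfyUI_windows_portable
python_embeded\python.exe -m pip install "open-clip-torch==2.24.0"
```

Running `pip` via `python.exe -m pip` ensures the package lands in the embedded environment that ComfyUI actually uses.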