BadCafeCode / masquerade-nodes-comfyui

A powerful set of mask-related nodes for ComfyUI
MIT License

OutOfMemoryError: Allocation on device 0 would exceed allowed memory. #19

Open brechtdecock opened 9 months ago

brechtdecock commented 9 months ago

Using the Mask By Text node, I get the following error when trying to combine prompts with the pipe (|) symbol:

Works: hat, shoes, jacket

Does not work: hat | shoes | jacket

line 63, in forward_multihead_attention
    attn_output_weights = torch.softmax(attn_output_weights, dim=-1)
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch.cuda.OutOfMemoryError: Allocation on device 0 would exceed allowed memory. (out of memory)
Currently allocated     : 15.58 GiB
Requested               : 4.45 GiB
Device limit            : 12.00 GiB
Free (according to CUDA): 0 bytes
PyTorch limit (set by user-supplied memory fraction)
                        : 17179869184.00 GiB
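
The fuller traceback later in this thread shows that MaskNodes.py repeats the image once per "|"-separated prompt (img.repeat(len(prompts), 1, 1, 1)) before calling CLIPSeg, so the attention buffers grow with the number of prompts. A minimal workaround sketch, assuming the CLIPSeg model accepts an image batch plus a list of prompts the way the traceback suggests (mask_per_prompt, model, img and threshold are placeholder names, not the node's actual API): run each prompt separately and take the union of the resulting masks.

# Sketch only: run CLIPSeg once per prompt instead of batching the image
# len(prompts) times. Names are placeholders, not the node's real signature.
import torch

def mask_per_prompt(model, img, text, threshold=0.5):
    prompts = [p.strip() for p in text.split("|")]
    masks = []
    with torch.no_grad():
        for prompt in prompts:
            pred = model(img, [prompt])[0]        # one prompt -> batch size 1
            masks.append(torch.sigmoid(pred))
            torch.cuda.empty_cache()              # release the attention buffers
    merged = torch.stack(masks).max(dim=0).values  # union of the per-prompt masks
    return (merged > threshold).float()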

baselqt commented 8 months ago

same here

segalinc commented 7 months ago

After updating ComfyUI, memory usage blows up and it says it requires 189 GB?

marcsyp commented 6 months ago

+1, it tried to allocate 87 GB. Mask By Text is not working for me either.

SphaeroX commented 6 months ago
ERROR:root:Traceback (most recent call last):
  File "C:\ComfyUI\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\ComfyUI\custom_nodes\masquerade-nodes-comfyui\MaskNodes.py", line 157, in get_mask
    preds = model(img.repeat(len(prompts), 1, 1, 1), dup_prompts)[0]
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\xxx\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\xxx\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\xxx\AppData\Local\Programs\Python\Python311\Lib\site-packages\clipseg\clipseg.py", line 362, in forward
    visual_q, activations, _ = self.visual_forward(x_inp, extract_layers=[0] + list(self.extract_layers))
                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\xxx\AppData\Local\Programs\Python\Python311\Lib\site-packages\clipseg\clipseg.py", line 172, in visual_forward
    x, aff_per_head = forward_multihead_attention(x, res_block, with_aff=True, attn_mask=attn_mask)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\xxx\AppData\Local\Programs\Python\Python311\Lib\site-packages\clipseg\clipseg.py", line 63, in forward_multihead_attention
    attn_output_weights = torch.softmax(attn_output_weights, dim=-1)
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch.cuda.OutOfMemoryError: Allocation on device 0 would exceed allowed memory. (out of memory)
Currently allocated     : 8.52 GiB
Requested               : 3.80 GiB
Device limit            : 11.99 GiB
Free (according to CUDA): 0 bytes
PyTorch limit (set by user-supplied memory fraction)
                        : 17179869184.00 GiB

Prompt executed in 20.19 seconds

time-river commented 5 months ago

+1. I think the reason is that memory is not freed in time.
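
If that is the cause, a possible mitigation is to release cached VRAM just before the CLIPSeg forward pass. A minimal sketch, assuming comfy.model_management and its soft_empty_cache helper are importable from the running ComfyUI environment:

# Sketch: force cached VRAM to be released before running the model.
import gc
import torch
import comfy.model_management as mm

def free_vram():
    gc.collect()                  # drop dangling Python references first
    mm.soft_empty_cache()         # ask ComfyUI to release its cached models
    if torch.cuda.is_available():
        torch.cuda.empty_cache()  # hand cached blocks back to the CUDA driver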

VLevithan commented 5 months ago

+1 (screenshot attached)

zd1990 commented 5 months ago

You can add a "Rebatch Images" node to resolve this issue.

kurt83340 commented 4 months ago

+1 same issue TT

gabrielhrr commented 2 months ago

> You can add a "Rebatch Images" node to resolve this issue.

That doesn't work.