cubiq / ComfyUI_InstantID

Apache License 2.0

torch.cuda.OutOfMemoryError on InstantID #73

Open bsflll opened 4 months ago

bsflll commented 4 months ago

"torch.cuda.OutOfMemoryError: Allocation on device 0 would exceed allowed memory. (out of memory) Currently allocated : 826.09 MiB Requested : 5.00 MiB Device limit : 79.15 GiB Free (according to CUDA): 31.75 MiB PyTorch limit (set by user-supplied memory fraction) : 17179869184.00 GiB"

I did exactly what was shown in the video; I don't know why this error appears on the InstantID ControlNet.

bsflll commented 4 months ago

Error occurred when executing ApplyInstantID:

Allocation on device 0 would exceed allowed memory. (out of memory) Currently allocated : 830.23 MiB Requested : 1024.00 KiB Device limit : 79.15 GiB Free (according to CUDA): 25.75 MiB PyTorch limit (set by user-supplied memory fraction) : 17179869184.00 GiB

File "/home/christina/ComfyUI/execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
File "/home/christina/ComfyUI/execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/home/christina/ComfyUI/execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/home/christina/ComfyUI/custom_nodes/ComfyUI_InstantID/InstantID.py", line 472, in apply_instantid
    image_prompt_embeds, uncond_image_prompt_embeds = self.instantid.get_image_embeds(clip_embed.to(self.device, dtype=self.dtype), clip_embed_zeroed.to(self.device, dtype=self.dtype))
File "/home/christina/.local/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
File "/home/christina/ComfyUI/custom_nodes/ComfyUI_InstantID/InstantID.py", line 241, in get_image_embeds
    image_prompt_embeds = self.image_proj_model(clip_embed)
File "/home/christina/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "/home/christina/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
File "/home/christina/ComfyUI/custom_nodes/ComfyUI_InstantID/resampler.py", line 114, in forward
    x = self.proj_in(x)
File "/home/christina/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "/home/christina/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
File "/home/christina/.local/lib/python3.10/site-packages/torch/nn/modules/linear.py", line 116, in forward
    return F.linear(input, self.weight, self.bias)
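One detail worth noting (my reading of the numbers, not a confirmed diagnosis): the process itself has only ~830 MiB allocated on an ~79 GiB card, yet CUDA reports only ~26 MiB free. That gap suggests most of the VRAM is held outside this PyTorch process — another process, a second ComfyUI instance, or a stale CUDA context. A small stdlib-only sketch that extracts the sizes from the error message and computes the unaccounted-for VRAM:

```python
import re

# Size fields copied from the second OOM message above.
msg = ("Currently allocated : 830.23 MiB Requested : 1024.00 KiB "
       "Device limit : 79.15 GiB Free (according to CUDA): 25.75 MiB")

UNITS = {"KiB": 2**10, "MiB": 2**20, "GiB": 2**30}

def size_bytes(label: str, text: str) -> int:
    """Extract '<label> : <value> <unit>' from the allocator message, in bytes."""
    value, unit = re.search(label + r"\s*:\s*([\d.]+)\s*([KMG]iB)", text).groups()
    return int(float(value) * UNITS[unit])

allocated = size_bytes("Currently allocated", msg)
requested = size_bytes("Requested", msg)
free      = size_bytes(r"Free \(according to CUDA\)", msg)
limit     = size_bytes("Device limit", msg)

# VRAM that is neither allocated by this process nor reported free must be
# held elsewhere (other processes, another ComfyUI instance, a stale context).
unaccounted = limit - allocated - free
print(f"unaccounted for: {unaccounted / 2**30:.2f} GiB")
```

Note also that the requested amount (1 MiB) is smaller than the reported free memory, and the "PyTorch limit" of 17179869184.00 GiB equals 2^64 bytes, which usually means no real memory fraction is in effect — both consistent with the pressure coming from outside this process rather than from the InstantID node itself. Checking `nvidia-smi` for other consumers of device 0 would confirm or rule this out.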

cubiq commented 3 months ago

Do you execute Comfy with the high VRAM option? Do you use the GPU for insightface?

This is something I've seen a few times now, but I don't have access to server GPUs, so I'm not sure how to escalate the problem.

louishane commented 3 months ago

> Do you execute Comfy with the high VRAM option? Do you use the GPU for insightface?
>
> This is something I've seen a few times now, but I don't have access to server GPUs, so I'm not sure how to escalate the problem.

Tried adding --highvram to the .bat file. It has been working well with ComfyUI_InstantID since.
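If `--highvram` alone doesn't resolve it, another commonly tried knob (a general PyTorch allocator setting, not specific to this repo — the 512 MiB value here is just an example) is `PYTORCH_CUDA_ALLOC_CONF`, which can reduce fragmentation when free VRAM is low but requests are small. It must be set before torch initializes CUDA, e.g. in the launch script:

```python
import os

# The CUDA caching allocator reads this variable once at startup, so it has
# to be set before any torch/ComfyUI import that touches the GPU.
# max_split_size_mb caps how large a cached block the allocator will split,
# which can help with fragmented VRAM; 512 is an illustrative value.
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "max_split_size_mb:512")
print(os.environ["PYTORCH_CUDA_ALLOC_CONF"])
```

The equivalent in a Windows .bat launcher would be a `set PYTORCH_CUDA_ALLOC_CONF=...` line before the `python main.py` call.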