jags111 / efficiency-nodes-comfyui

A collection of ComfyUI custom nodes - an awesome, smart way to work with nodes!
https://civitai.com/models/32342
GNU General Public License v3.0

Memory leak after some update #227

Open druvissiksalietis opened 1 month ago

druvissiksalietis commented 1 month ago

For a long time my workflow worked without any issues; I use the auto queue option in ComfyUI. I did not notice which update broke something, but now it fails after 2 or 3 iterations with the error "!!! Exception during processing!!! Invalid buffer size: 8.00 GB".

Here is the call stack:

```
!!! Exception during processing!!! Invalid buffer size: 8.00 GB
Traceback (most recent call last):
  File "/Users/tester/AI_tools/ComfyUI/execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "/Users/tester/AI_tools/ComfyUI/execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "/Users/tester/AI_tools/ComfyUI/execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "/Users/tester/AI_tools/ComfyUI/custom_nodes/efficiency-nodes-comfyui/efficiency_nodes.py", line 732, in sample
    samples, images, gifs, preview = process_latent_image(model, seed, steps, cfg, sampler_name, scheduler,
  File "/Users/tester/AI_tools/ComfyUI/custom_nodes/efficiency-nodes-comfyui/efficiency_nodes.py", line 618, in process_latent_image
    samples = KSampler().sample(latent_upscale_model, hires_seed, hires_steps, cfg, sampler_name, scheduler,
  File "/Users/tester/AI_tools/ComfyUI/nodes.py", line 1382, in sample
    return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
  File "/Users/tester/AI_tools/ComfyUI/nodes.py", line 1352, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
  File "/Users/tester/AI_tools/ComfyUI/custom_nodes/ComfyUI-Impact-Pack/modules/impact/sample_error_enhancer.py", line 22, in informative_sample
    raise e
  File "/Users/tester/AI_tools/ComfyUI/custom_nodes/ComfyUI-Impact-Pack/modules/impact/sample_error_enhancer.py", line 9, in informative_sample
    return original_sample(*args, **kwargs)  # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
  File "/Users/tester/AI_tools/ComfyUI/comfy/sample.py", line 43, in sample
    samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "/Users/tester/AI_tools/ComfyUI/comfy/samplers.py", line 829, in sample
    return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "/Users/tester/AI_tools/ComfyUI/comfy/samplers.py", line 729, in sample
    return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
  File "/Users/tester/AI_tools/ComfyUI/comfy/samplers.py", line 716, in sample
    output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
  File "/Users/tester/AI_tools/ComfyUI/comfy/samplers.py", line 695, in inner_sample
    samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
  File "/Users/tester/AI_tools/ComfyUI/comfy/samplers.py", line 600, in sample
    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
  File "/Users/tester/Library/Caches/pypoetry/virtualenvs/comfyui-9uFEEIhY-py3.11/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/Users/tester/AI_tools/ComfyUI/comfy/k_diffusion/sampling.py", line 600, in sample_dpmpp_2m
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "/Users/tester/AI_tools/ComfyUI/comfy/samplers.py", line 299, in __call__
    out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
  File "/Users/tester/AI_tools/ComfyUI/comfy/samplers.py", line 682, in __call__
    return self.predict_noise(*args, **kwargs)
  File "/Users/tester/AI_tools/ComfyUI/comfy/samplers.py", line 685, in predict_noise
    return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
  File "/Users/tester/AI_tools/ComfyUI/comfy/samplers.py", line 279, in sampling_function
    out = calc_cond_batch(model, conds, x, timestep, model_options)
  File "/Users/tester/AI_tools/ComfyUI/comfy/samplers.py", line 228, in calc_cond_batch
    output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
  File "/Users/tester/AI_tools/ComfyUI/comfy/model_base.py", line 124, in apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
  File "/Users/tester/Library/Caches/pypoetry/virtualenvs/comfyui-9uFEEIhY-py3.11/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/tester/Library/Caches/pypoetry/virtualenvs/comfyui-9uFEEIhY-py3.11/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/tester/AI_tools/ComfyUI/comfy/ldm/modules/diffusionmodules/openaimodel.py", line 852, in forward
    h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator)
  File "/Users/tester/AI_tools/ComfyUI/comfy/ldm/modules/diffusionmodules/openaimodel.py", line 44, in forward_timestep_embed
    x = layer(x, context, transformer_options)
  File "/Users/tester/Library/Caches/pypoetry/virtualenvs/comfyui-9uFEEIhY-py3.11/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/tester/Library/Caches/pypoetry/virtualenvs/comfyui-9uFEEIhY-py3.11/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/tester/AI_tools/ComfyUI/comfy/ldm/modules/attention.py", line 694, in forward
    x = block(x, context=context[i], transformer_options=transformer_options)
  File "/Users/tester/Library/Caches/pypoetry/virtualenvs/comfyui-9uFEEIhY-py3.11/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/tester/Library/Caches/pypoetry/virtualenvs/comfyui-9uFEEIhY-py3.11/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/tester/AI_tools/ComfyUI/comfy/ldm/modules/attention.py", line 581, in forward
    n = self.attn1(n, context=context_attn1, value=value_attn1)
  File "/Users/tester/Library/Caches/pypoetry/virtualenvs/comfyui-9uFEEIhY-py3.11/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/tester/Library/Caches/pypoetry/virtualenvs/comfyui-9uFEEIhY-py3.11/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/tester/AI_tools/ComfyUI/comfy/ldm/modules/attention.py", line 475, in forward
    out = optimized_attention(q, k, v, self.heads, attn_precision=self.attn_precision)
  File "/Users/tester/AI_tools/ComfyUI/comfy/ldm/modules/attention.py", line 407, in attention_pytorch
    out = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=mask, dropout_p=0.0, is_causal=False)
RuntimeError: Invalid buffer size: 8.00 GB
```

druvissiksalietis commented 1 month ago

Here is list of virtual environment packages: accelerate 0.33.0 aiohappyeyeballs 2.3.4 aiohttp 3.10.0 aiosignal 1.3.1 annotated-types 0.7.0 anyio 4.4.0 asttokens 2.4.1 attrs 23.2.0 certifi 2024.7.4 cffi 1.16.0 charset-normalizer 3.3.2 click 8.1.7 clip-interrogator 0.6.0 cmake 3.30.1 coloredlogs 15.0.1 contourpy 1.2.1 cryptography 43.0.0 cstr 0.1.0 cycler 0.12.1 datasets 2.20.0 decorator 4.4.2 deepdiff 7.0.1 Deprecated 1.2.14 dill 0.3.8 distro 1.9.0 einops 0.8.0 executing 2.0.1 fairscale 0.4.13 ffmpy 0.3.0 filelock 3.15.4 flatbuffers 24.3.25 fonttools 4.53.1 frozenlist 1.4.1 fsspec 2024.5.0 ftfy 6.2.0 gitdb 4.0.11 GitPython 3.1.43 groq 0.9.0 h11 0.14.0 httpcore 1.0.5 httpx 0.27.0 huggingface-hub 0.24.5 humanfriendly 10.0 idna 3.7 imageio 2.34.2 imageio-ffmpeg 0.5.1 img2texture 1.0.6 ipython 8.26.0 jedi 0.19.1 Jinja2 3.1.4 joblib 1.4.2 jsonschema 4.23.0 jsonschema-specifications 2023.12.1 kiwisolver 1.4.5 kornia 0.7.3 kornia_rs 0.1.5 lazy_loader 0.4 llvmlite 0.43.0 markdown-it-py 3.0.0 MarkupSafe 2.1.5 matplotlib 3.9.1 matplotlib-inline 0.1.7 matrix-client 0.4.0 mdurl 0.1.2 moviepy 1.0.3 mpmath 1.3.0 multidict 6.0.5 multiprocess 0.70.16 networkx 3.3 ninja 1.11.1.1 numba 0.60.0 numpy 1.26.4 onnx 1.16.2 onnxruntime 1.18.1 open_clip_torch 2.26.1 opencv-python 4.10.0.84 opencv-python-headless 4.7.0.72 optimum 1.21.2 optimum-quanto 0.2.4 ordered-set 4.1.0 packaging 24.1 pandas 2.2.2 parso 0.8.4 peft 0.12.0 pexpect 4.9.0 piexif 1.1.3 pilgram 1.2.1 Pillow 9.5.0 pip 24.2 platformdirs 4.2.2 pooch 1.8.2 proglog 0.1.10 prompt_toolkit 3.0.47 protobuf 5.27.3 psutil 6.0.0 ptyprocess 0.7.0 pure_eval 0.2.3 py-cpuinfo 9.0.0 pyarrow 17.0.0 pyarrow-hotfix 0.6 pycparser 2.22 pydantic 2.8.2 pydantic_core 2.20.1 PyGithub 2.3.0 Pygments 2.18.0 PyJWT 2.9.0 PyMatting 1.1.12 PyNaCl 1.5.0 pynvml 11.5.3 pyparsing 3.1.2 python-dateutil 2.9.0.post0 python-dotenv 1.0.1 pytz 2024.1 PyYAML 6.0.1 referencing 0.35.1 regex 2024.7.24 rembg 2.0.57 requests 2.32.3 rich 13.7.1 rpds-py 0.19.1 safetensors 0.4.3 scikit-image 0.24.0 scikit-learn 1.5.1 scipy 1.14.0 seaborn 0.13.2 segment-anything 1.0 Send2Trash 1.8.3 sentence-transformers 3.0.1 sentencepiece 0.2.0 setuptools 68.0.0 shellingham 1.5.4 simpleeval 0.9.13 six 1.16.0 smmap 5.0.1 sniffio 1.3.1 soundfile 0.12.1 spandrel 0.3.4 stack-data 0.6.3 sympy 1.13.1 threadpoolctl 3.5.0 tifffile 2024.7.24 timm 1.0.8 tokenizers 0.19.1 torch 2.5.0.dev20240803 torchaudio 2.4.0.dev20240803 torchsde 0.2.6 torchvision 0.20.0.dev20240803 tqdm 4.66.4 traitlets 5.14.3 trampoline 0.1.2 transformers 4.42.4 typer 0.12.3 typing_extensions 4.12.2 tzdata 2024.1 ultralytics 8.2.71 ultralytics-thop 2.0.0 urllib3 1.26.19 wcwidth 0.2.13 wheel 0.40.0 wrapt 1.16.0 xxhash 3.4.1 yarl 1.9.4

jags111 commented 1 month ago

Oh, this needs to be studied, since the error in the stack passes through multiple node groups. Could you update ComfyUI and all of the main dependent node packs? The debug node from mtb may also be useful for identifying which node in your workflow is causing this.
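
A minimal sketch of doing that update from a script, assuming every pack under custom_nodes is a plain git checkout (COMFY_ROOT is the path seen in the traceback above; adjust it for your install). If ComfyUI-Manager is installed, its built-in update function covers the same ground.

```python
# Sketch, not an official updater: pull the latest ComfyUI and every
# custom node pack that is a git checkout. Back up first -- an update can
# break packs that pin different dependency versions.
import subprocess
from pathlib import Path

COMFY_ROOT = Path("/Users/tester/AI_tools/ComfyUI")  # adjust to your install

def git_pull(repo: Path) -> None:
    print(f"== {repo.name}")
    subprocess.run(["git", "-C", str(repo), "pull", "--ff-only"], check=False)

git_pull(COMFY_ROOT)
for node_dir in sorted((COMFY_ROOT / "custom_nodes").iterdir()):
    if (node_dir / ".git").exists():
        git_pull(node_dir)
```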

druvissiksalietis commented 1 month ago

I updated ComfyUI and did more testing, and I am still getting the same issue. Sometimes I also get this error:

```
!!! Exception during processing!!! MPS backend out of memory (MPS allocated: 15.22 GB, other allocations: 52.97 MB, max allowed: 18.13 GB). Tried to allocate 6.02 GB on private pool. Use PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to disable upper limit for memory allocations (may cause system failure).
Traceback (most recent call last):
  File "/Users/druvissiksalietis/AI_tools/ComfyUI/execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "/Users/druvissiksalietis/AI_tools/ComfyUI/execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "/Users/druvissiksalietis/AI_tools/ComfyUI/execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
```

One observation: when I manually queue multiple tasks it fails after 5-8 tasks, but with auto queue it fails after 2-4. Do you know any way to enable debugging in ComfyUI to get more info?
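
One way to see whether memory really climbs between queue runs is to poll the MPS allocator directly. Below is a debugging sketch using PyTorch's MPS introspection calls (available in recent torch builds); where you hook it, for example temporarily at the end of the execution loop or inside a tiny custom node, is up to you. Note that the PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 hint in the error message only removes the allocator cap; it does not fix whatever keeps the memory alive between runs.

```python
# Debugging sketch: log what the MPS allocator is holding after each queue run.
# These torch.mps calls exist in recent PyTorch builds; this is not an
# official ComfyUI feature.
import gc
import torch

def log_mps_memory(tag: str = "") -> None:
    gc.collect()
    if torch.backends.mps.is_available():
        current = torch.mps.current_allocated_memory() / 1024**3  # tensors still alive
        driver = torch.mps.driver_allocated_memory() / 1024**3    # everything MPS holds, incl. cache
        print(f"[mps {tag}] current={current:.2f} GB driver={driver:.2f} GB")

def free_mps_cache() -> None:
    # Returns cached blocks to the system; if the 'driver' number keeps
    # climbing even after this, something is genuinely leaking references.
    gc.collect()
    torch.mps.empty_cache()
```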

druvissiksalietis commented 4 weeks ago

Update: I use SD 1.5 models, and for a long time the samplers worked without issues: dpmpp_2m + karras. I experimented and replaced the sampler with euler, and now my workflow runs without memory issues.

It looks like something is broken in the samplers.

druvissiksalietis commented 4 weeks ago

I have done more tests: SD 1.5 model, 25 steps. Here is a plot of the sampler and scheduler combinations (with the default settings, a lot of them are not working): engine_tests_00002
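
For reference, a sweep like this can also be automated against a running ComfyUI instance over its HTTP API. A sketch follows, assuming the workflow was exported with "Save (API Format)" to workflow_api.json and that "3" is the id of the sampler node in that export (hypothetical; look up the real id in your file).

```python
# Sketch of automating the sampler/scheduler sweep via ComfyUI's HTTP API.
# Assumes the server runs on the default 127.0.0.1:8188 and that node "3"
# in workflow_api.json is the sampler node (hypothetical id).
import itertools
import json
import urllib.request

SAMPLERS = ["euler", "dpmpp_2m", "dpmpp_2m_sde"]
SCHEDULERS = ["normal", "karras", "exponential"]

with open("workflow_api.json") as f:
    workflow = json.load(f)

for sampler, scheduler in itertools.product(SAMPLERS, SCHEDULERS):
    workflow["3"]["inputs"]["sampler_name"] = sampler
    workflow["3"]["inputs"]["scheduler"] = scheduler
    req = urllib.request.Request(
        "http://127.0.0.1:8188/prompt",
        data=json.dumps({"prompt": workflow}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(sampler, scheduler, resp.status)
```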

druvissiksalietis commented 4 weeks ago

All samplers start working only when I add the --use-pytorch-cross-attention parameter to ComfyUI, but it is slower and appears to use more memory. engine_tests_00003_

jags111 commented 3 weeks ago

Sure, I will check and reply.