comfyanonymous / ComfyUI

The most powerful and modular diffusion model GUI, api and backend with a graph/nodes interface.
https://www.comfy.org/

Stuck in "got prompt" when using large workflows #1917

Open jeffersoncgo opened 10 months ago

jeffersoncgo commented 10 months ago

I have a really large workflow with various custom nodes, and it was working really well until I added "too many LoRAs". The LoRAs are loaded by a custom node that takes a bypass option as a parameter. I rewrote the node using the core LoraLoader as a base to try to avoid this problem, but it didn't help.

```python
import folder_paths
from nodes import LoraLoader


class Lora_Loader(LoraLoader):
    @classmethod
    def INPUT_TYPES(s):
        types = super().INPUT_TYPES()
        types["required"]["lora_name"] = (folder_paths.get_filename_list("loras"),)
        types["required"]["bypass"] = ("BOOLEAN", {"default": False},)
        types["optional"] = {}
        types["optional"]["triggers"] = ("STRING", {"default": "", "multiline": True},)
        return types

    RETURN_TYPES = ("MODEL", "CLIP", "STRING", "BOOLEAN")
    RETURN_NAMES = ("MODEL", "CLIP", "LORA_NAME", "BYPASSED")
    CATEGORY = "JeffisNotHere / Loaders"

    def load_lora(self, **kwargs):
        # Pull the extra inputs out so only the core LoraLoader arguments are
        # forwarded to the parent implementation.
        bypass = kwargs.pop("bypass")
        kwargs.pop("triggers", None)
        if bypass:
            # Previously this returned early; zeroing the strengths instead
            # still routes the call through the core loader, to check whether
            # that avoided the problem (it didn't).
            kwargs["strength_model"] = 0
            kwargs["strength_clip"] = 0
        model_lora, clip_lora = super().load_lora(**kwargs)
        return (model_lora, clip_lora, kwargs["lora_name"], bypass)


NODE_CLASS_MAPPINGS = {
    "Jeff - Lora Loader": Lora_Loader,
}
```

[screenshot: the part of the workflow that hangs]

If I enable the rows one by one, it works (until I reach the stuck zone), but if I start by loading from the slow zone right away, it also gets stuck.

All of them have their own bypass parameter set to true, so the LoRAs don't actually load; even so, it still gets stuck. Is this related to a limit on the JSON data size/length in the POST request?

The screenshot shows just a part of the workflow, the part that is misbehaving.

jeffersoncgo commented 10 months ago

Oh, I almost forgot: it gets stuck here: [screenshot: console output stopping at "got prompt"]

No matter how long I wait, it never goes beyond the "got prompt" message.

I edited execution.py and inserted print statements in each function to see where it was getting stuck.

It sometimes gets stuck seemingly endlessly in recursive_output_delete_if_changed and sometimes in recursive_will_execute.
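
A cleaner way to see it than print statements (just a sketch, assuming execution.py still exposes these two helpers as module-level functions, as it does in my build) is to wrap them with a call counter before queueing the prompt:

```python
# Rough instrumentation sketch: count how often each recursive helper runs
# instead of scattering print statements through execution.py.
import execution

def count_calls(name):
    original = getattr(execution, name)
    stats = {"calls": 0}

    def wrapper(*args, **kwargs):
        stats["calls"] += 1
        return original(*args, **kwargs)

    # Replacing the module attribute also intercepts the recursive calls,
    # since those go through the module-level name.
    setattr(execution, name, wrapper)
    return stats

will_execute_calls = count_calls("recursive_will_execute")
delete_calls = count_calls("recursive_output_delete_if_changed")
# After queueing a prompt, the counters show how many times the same nodes
# get re-visited, which is where the time goes on a large graph.
```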

poisenbery commented 8 months ago

Bumping this. I have an extremely complex workflow as well; generation hangs for about 15 minutes before actually doing anything.

I removed nodes that account for about 30 seconds of processing, and the entire generation got 14 minutes shorter.
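
That kind of blow-up is what you would expect if the pre-execution pass re-walks the whole graph for every node without caching. A toy illustration (made-up function names, not ComfyUI's actual code) of how an unmemoized recursive walk over a graph with shared inputs scales:

```python
# Toy model, not ComfyUI code: each node feeds two inputs from the previous
# node (e.g. MODEL and CLIP both routed through the same LoRA loader), so a
# naive recursive walk re-visits the whole ancestry once per input.
def will_execute_naive(graph, node, visits):
    visits[0] += 1
    for parent in graph.get(node, []):
        will_execute_naive(graph, parent, visits)

def will_execute_memoized(graph, node, visits, seen=None):
    seen = set() if seen is None else seen
    if node in seen:
        return
    seen.add(node)
    visits[0] += 1
    for parent in graph.get(node, []):
        will_execute_memoized(graph, parent, visits, seen)

# Chain of 25 nodes where node i takes two inputs from node i - 1.
graph = {i: [i - 1, i - 1] for i in range(1, 25)}

naive, memo = [0], [0]
will_execute_naive(graph, 24, naive)
will_execute_memoized(graph, 24, memo)
print(naive[0], "visits without memoization vs", memo[0], "with")
# ~33.5 million visits vs 25 on the same graph, which is why a handful of
# extra nodes can add minutes of silence after "got prompt".
```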