mnpenner opened this issue 7 months ago
A loop-style workflow is not feasible within the current execution model. However, by combining the Backend Cache feature of the Inspire Pack with Auto Queue, it may be possible to achieve what the Reduce node is meant to do.
I was hoping to avoid having to press queue more than once or use the Auto Queue feature because it makes transitioning to a new starter image more awkward, but if that's the best that can be done for now I guess I'll have to make do. Thanks for the suggestion @ltdrdata
FYI, Impact Pack provides a 'Queue Trigger' node. You can use it instead of Auto Queue.
The "Queue Trigger" node is working is working great, but "Retrieve Backend Data" keeps throwing an error when there's no data, which will always be the case on the first iteration. I've set up a conditional (custom "If Zero") node so that it uses a different image for the first iteration, so I'd much rather Retrieve Backend Data
just returns None
instead of complaining it can't find the key. Do you know of a workaround for that?
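For context, the "If Zero" conditional I rigged up is basically just a fallback selector keyed on the iteration index. Roughly this (a minimal sketch using my own MpenBaseNode/ComfyType helpers, so the input names are mine, not anything from the Inspire Pack):

class IfZero(MpenBaseNode):
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "number": (ComfyType.INT, {"default": 0}),
                "if_zero": (ComfyType.ANY,),
            },
            "optional": {
                "otherwise": (ComfyType.ANY,),
            },
        }

    RETURN_TYPES = [ComfyType.ANY]
    RETURN_NAMES = ["out"]
    FUNCTION = "run"

    def run(self, number, if_zero, otherwise=None):
        # On the first iteration (index 0) use the starter image instead of the cached data.
        return (if_zero if number == 0 else otherwise,)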
The "Queue Trigger" node is working is working great, but "Retrieve Backend Data" keeps throwing an error when there's no data, which will always be the case on the first iteration. I've set up a conditional (custom "If Zero") node so that it uses a different image for the first iteration, so I'd much rather
Retrieve Backend Data
just returnsNone
instead of complaining it can't find the key. Do you know of a workaround for that?
Now it returns None instead of raising an exception.
It works perfectly now, thanks so much @ltdrdata !
Is this reducer possible now, with the GraphBuilder update? I'm trying to figure it out, but there doesn't seem to be much documentation on it.
Here's my setup:
I want to invert the 3 faces, one at a time. Offscreen, I'm using SAMDetector to get the 3 masks, which I pass into array (it's actually a Tensor(shape=(3, 1, 409, 619), dtype=torch.uint8)). item will then be the first mask, and accum is initialized to init, which is just the input image. I use the mask to crop the image and then paste it back into accum. The output of that (Paste by Mask) is supposed to loop back into the reducer, repeat for the remaining 2 masks, and then output the final result on final.
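In tensor terms, each pass of the loop should do something like the following. This is just a sketch of the data flow, assuming ComfyUI's (batch, height, width, channels) image layout; enhance() is a made-up stand-in for whatever happens between Reduce and Paste by Mask.

import torch

def reduce_step(accum, mask, enhance):
    # accum: image tensor (1, H, W, C); mask: one (1, H, W) slice of the (3, 1, H, W) batch
    m = mask.float().unsqueeze(-1)            # (1, H, W, 1) so it broadcasts over channels
    processed = enhance(accum)                # e.g. invert / upscale the face region
    return accum * (1.0 - m) + processed * m  # paste the masked region back into accum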
This is what I've got so far:
from comfy_execution.graph_utils import GraphBuilder

# MpenBaseNode, ComfyType and log are my own helpers.
class Reducer(MpenBaseNode):
    NODE_ID = "73957e67-f36f-42e5-879d-a2a5843522d9"
    NAME = "Reduce"

    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "array": (ComfyType.ANY,),
                "index": (ComfyType.INT, {"default": 0}),
                "accum": (ComfyType.ANY, {"lazy": True}),
            },
            "optional": {
                "init": (ComfyType.ANY,),
            },
            "hidden": {
                "dynprompt": "DYNPROMPT",
                "unique_id": "UNIQUE_ID",
                # "extra_pnginfo": "EXTRA_PNGINFO",
            },
        }

    @classmethod
    def VALIDATE_INPUTS(s, **kwargs):
        return True

    RETURN_TYPES = [ComfyType.ANY, ComfyType.INT, ComfyType.ANY, ComfyType.ANY]
    RETURN_NAMES = ["item", "index", "accum", "final"]

    def check_lazy_status(self, **kwargs):
        log("reducer check_lazy_status")
        used = []
        # Only ask for accum after the first iteration; on iteration 0 it comes from init.
        if kwargs['index'] > 0:
            used.append("accum")
        return used

    def run(self, **kwargs):
        log("reducer", kwargs)
        graph = GraphBuilder()
        idx = kwargs['index']
        item = kwargs['array'][idx]
        accum = kwargs['init'] if idx == 0 else kwargs['accum']
        final = accum if idx == len(kwargs['array']) - 1 else None
        prompts = kwargs['dynprompt'].get_original_prompt()
        log("PROMPTS", prompts)
        return {
            "result": (item, idx, accum, final),
            # The expanded graph is still empty; this is the part I can't figure out.
            "expand": graph.finalize(),
        }
If I understand correctly, GraphBuilder lets us dynamically rewrite the graph, so I think what I want to do is duplicate everything between the Reduce and Paste by Mask nodes 3 times (the batch size of array) and wire them up sequentially, but I'm not sure how to do that. If anyone has pointers, that'd be appreciated.
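From what I can piece together, the expansion mechanism works by returning links into a subgraph built with GraphBuilder, something like the sketch below. To be clear, this is just my best guess at the pattern: ImageInvert and ImageCompositeMasked are core nodes I'm using as stand-ins for my actual Reduce-to-Paste-by-Mask chain, and I'm not sure whether raw tensors can be passed directly as expanded-node inputs or whether they have to arrive as links.

from comfy_execution.graph_utils import GraphBuilder

def run(self, **kwargs):
    graph = GraphBuilder()
    accum = kwargs['init']
    # Emit one copy of the per-mask subgraph per item, wired up sequentially.
    for idx in range(len(kwargs['array'])):
        inverted = graph.node("ImageInvert", image=accum)
        pasted = graph.node("ImageCompositeMasked",
                            destination=accum,
                            source=inverted.out(0),
                            mask=kwargs['array'][idx],
                            x=0, y=0, resize_source=False)
        accum = pasted.out(0)   # the next iteration consumes this link
    return {
        # Return the link to the last step as "final"; the engine resolves it
        # after the expanded subgraph has executed.
        "result": (None, 0, accum, accum),
        "expand": graph.finalize(),
    }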
It would be great if there was some way to loop over lists so that we can process things like "SEGS" (from Impact Pack) one item at a time. For example, I can detect all the faces in an image and then output them as a list of masks. For each mask I want to upscale/enhance that part of the image and paste it back into the image.
I was thinking about how this would look in the UI, and I realized this is exactly like a "reducer" in many programming languages. It could look like this:
(The mockup in the screenshot is just a shell of a node)
On the left I have some node that produces a list, doesn't matter what, and that would go into the array input. item would output the first entry from the list/array, which I could process however I please, but it would most likely be merged into accum (accumulator), which would most likely be an output image (e.g. paste the enhanced face back into the image). Then you take that output and feed it back into the accum input of the Reduce node, thus forming a loop. Reduce then runs again with the next item in the input array and the updated accum, which repeats until the list is done, and then outputs the final result on final. init is the initial value for the accum output, for the first iteration of the loop. It's optional and will be None if not specified, just like a normal reduce function.

I would implement this myself as a custom node, but as far as I know I can't output/return more than once during a single "Queue Prompt" cycle like this node needs to do.
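To put it another way, the semantics I'm after are exactly those of functools.reduce. In plain Python the whole thing collapses to something like this (paste_enhanced_face is just a made-up name for the per-item processing step):

from functools import reduce

# masks -> the "array" input (one mask per detected face)
# image -> the "init" value (the starter image)
# paste_enhanced_face(accum, mask) returns a new accum with that face enhanced and pasted back
final = reduce(paste_enhanced_face, masks, image)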