Hm, how to explain this? Let's start with this:

As you see, `276` connects directly to `Empty Batch Latent`, which executes once, while `134` connects to `Rebatch Latents` instead, which executes twice. But what does it mean to "execute once" or "execute twice"?
Let's start with how ComfyUI executes a node with batched inputs. Assume a node `SUM` that just sums two numbers, so it has 2 inputs, `a` and `b`. Two different nodes connect to `SUM`'s inputs: `NODEA` connects to input `a`, and `NODEB` connects to input `b`. Now give these primitive nodes the ability to output more than one number, determined by a `batch_size` widget just like `Rebatch Latents`.
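To make this concrete, here's a minimal sketch of what such a `SUM` node could look like, written in ComfyUI's custom-node convention (the node itself is hypothetical; only the class layout follows the real convention):

```python
# Hypothetical SUM node, sketched in ComfyUI custom-node style.
class SUM:
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {
            "a": ("INT", {"default": 0}),
            "b": ("INT", {"default": 0}),
        }}

    RETURN_TYPES = ("INT",)
    FUNCTION = "run"
    CATEGORY = "examples"

    def run(self, a, b):
        # sums exactly one pair; batching happens outside the node
        return (a + b,)
```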
When `NODEA` and `NODEB` have their `batch_size` set to 1 and output some numbers, internally ComfyUI does not actually output just a number; instead it wraps each one in a list containing only a single element, which is the number we are interested in (so the outputs are `[11]` and `[22]`, NOT `11` and `22`). Generally this concept is called batching.

When attempting to execute node `SUM`, ComfyUI detects that the node received these list batches, and since every single input has size `1`, it will execute `SUM` only once, then output `[33]`.
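In plain Python, the size-1 case looks roughly like this (a sketch of the behaviour described above, not ComfyUI's actual scheduler):

```python
def SUM(a, b):
    return a + b

NODEA = [11]  # batch_size = 1: the value is still wrapped in a list
NODEB = [22]

# every input batch has size 1, so SUM runs once and the result
# is wrapped back into a single-element batch
output = [SUM(NODEA[0], NODEB[0])]
print(output)  # [33]
```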
But what if we increase `batch_size`?

`batch_size = 2` => `NODEA = [11, 22]`, `NODEB = [33, 44]`

Now both `NODEA` and `NODEB` output batches of two elements, but how should ComfyUI execute `SUM`?
If a node has `INPUT_IS_LIST` set to `False`, ComfyUI will automatically unpack the batches and pair the inputs together. What does pairing mean? Take the current example, `NODEA = [11, 22]` and `NODEB = [33, 44]`. Here the pairing algorithm unwraps the two batches, then pairs `11` with `33` and `22` with `44`.
Since there are two possible pairs, ComfyUI will execute `SUM` first with the pair `11, 33`, then with `22, 44`, making `SUM` execute twice; ComfyUI then groups the results of these two executions together and outputs `[44, 66]`.
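Here is roughly what that unpack-pair-execute loop amounts to (again an illustrative sketch of the behaviour, not the real implementation):

```python
def SUM(a, b):
    return a + b

NODEA = [11, 22]
NODEB = [33, 44]

# INPUT_IS_LIST = False: unpack the batches, execute once per pair,
# then regroup the results into an output batch
output = [SUM(a, b) for a, b in zip(NODEA, NODEB)]
print(output)  # [44, 66]
```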
What if `INPUT_IS_LIST` is set to `True`?

Assume we have a node `SUM2` which has `INPUT_IS_LIST` set to `True`, unlike `SUM` which has it set to `False`. ComfyUI detects that `SUM2` has `INPUT_IS_LIST` set to `True`, so it no longer automatically unwraps the inputs; instead it hands both inputs as full batches to `SUM2` and executes `SUM2` only ONCE. `SUM2`'s implementation then unwraps the batches manually. Everything else works just like `SUM`: it sums each possible pair of numbers together. Of course, `SUM2` has to recreate the batches correctly by itself, and then outputs `[44, 66]`.
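A sketch of what `SUM2`'s implementation would have to do itself in that case:

```python
def SUM2(a_batch, b_batch):
    # INPUT_IS_LIST = True: called exactly ONCE with the full batches;
    # the node unwraps, pairs, and re-batches manually
    return [a + b for a, b in zip(a_batch, b_batch)]

print(SUM2([11, 22], [33, 44]))  # [44, 66]
```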
Some more examples of how mismatched batch sizes get paired:

- `batch_size = 2` => `NODEA = [11, 22]`, `batch_size = 1` => `NODEB = [33]`: pairs `11, 33` and `22, 33`, which causes `SUM` to execute twice.
- `batch_size = 3` => `NODEA = [11, 22, 33]`, `batch_size = 3` => `NODEB = [44, 55, 66]`: pairs `11, 44`, `22, 55`, and `33, 66`, which causes `SUM` to execute 3 times.
- `batch_size = 3` => `NODEA = [11, 22, 33]`, `batch_size = 2` => `NODEB = [44, 55]`: pairs `11, 44`, `22, 55`, and `33, 55`, which causes `SUM` to execute 3 times.
- `batch_size = 3` => `NODEA = [11, 22, 33]`, `batch_size = 1` => `NODEB = [44]`: pairs `11, 44`, `22, 44`, and `33, 44`, which causes `SUM` to execute 3 times.

You get the gist of how ComfyUI makes pairs and executes things now. But how does ComfyUI handle a node with 3 or more inputs?
I'll put this Python code here. This algorithm replicates 1:1 how ComfyUI unwraps the batches:
```python
def dict_slice(dict_list):
    # iterate up to the longest batch
    for i in range(max((len(v) for v in dict_list.values()), default=0)):
        curr = dict()
        for k, v in dict_list.items():
            # shorter batches repeat their last element
            curr[k] = v[i] if i < len(v) else v[-1]
        yield curr

print(list(dict_slice({
    "aa": [11, 12],
    "bb": [21, 22, 23],
    "cc": [31, 32],
    "dd": [41]
})))
```

Output:

```
[
    {'aa': 11, 'bb': 21, 'cc': 31, 'dd': 41},
    {'aa': 12, 'bb': 22, 'cc': 32, 'dd': 41},
    {'aa': 12, 'bb': 23, 'cc': 32, 'dd': 41}
]
```
As you can see, this example produces 3 possible groups, which means any node that has these 4 inputs and `INPUT_IS_LIST` set to `False` will execute 3 times.
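You can also check the earlier two-input examples against the same function, e.g. the `batch_size` 3 vs. 1 case:

```python
print(list(dict_slice({"a": [11, 22, 33], "b": [44]})))
# [{'a': 11, 'b': 44}, {'a': 22, 'b': 44}, {'a': 33, 'b': 44}]
```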
Back to the workflow. The only point of interest here is that `134` has its inputs hooked to `Rebatch Latents` nodes, all of which have `batch_size` set to `2`, which means `134` executes twice and hence outputs this:
```
List of 2:
    Dict of 21:
        Key (tuple): ('type', 'INT')
            Type: str
        Key (tuple): ('data', 'INT', 0)
            Type: int
        Key (tuple): ('index', 'INT')
            Type: int
        ...
    Dict of 21:
        Key (tuple): ('type', 'INT')
            Type: str
        Key (tuple): ('data', 'INT', 0)
            Type: int
        Key (tuple): ('index', 'INT')
            Type: int
        ...
```
Which is exactly what we would expect: there are now 2 different Junction objects in the same batch, both originating from the same `134` node, and each of them holds exactly one CLIP, since only a single CLIP was provided. When ComfyUI detects this case, it will then execute `Beautify`; but since `Beautify` has `INPUT_IS_LIST` set to `True`, it will only execute `Beautify` once and hand it the batch of two CLIPs, `[CLIP, CLIP]`, directly.

Where did the second `CLIP` come from? The implementation of `Junction` first detects the type of the output, which is `CLIP`. Since the `134` junction executes twice, it outputs `[(junc_object, CLIP), (junc_object, CLIP)]`, and ComfyUI regroups these two `CLIP`s into a single batch since both sit in the same output position. For the case of `276`, it behaves as expected, since there's only a single `CLIP` output and the junction algorithm gives exactly that. If you want to get the second `CLIP`, you have to hook another node that accepts a CLIP input to `276`'s output.
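To illustrate the regrouping (a toy sketch; `junc_object` and the `CLIP` strings are stand-ins, not the real objects):

```python
# each execution of 134 returns one tuple of outputs
run_1 = ("junc_object", "CLIP")   # execution 1
run_2 = ("junc_object", "CLIP")   # execution 2

# ComfyUI collects results that sit in the same output position
regrouped = [list(slot) for slot in zip(run_1, run_2)]
print(regrouped[1])  # ['CLIP', 'CLIP'] - the batch of two CLIPs handed to Beautify
```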
One more thing: `Junction` has `INPUT_IS_LIST` set to `False`, while `JunctionBatch` has it set to `True`. Explaining this fully would make the comment too long, but hopefully, after reading how ComfyUI executes nodes, you can see how `JunctionBatch` behaves differently in `pluck` and `batch` mode.
As for `Rebatch Latents` itself: the `Empty Latent Image` node has `batch_size` set to `3`, but internally that is not actually a list batch. It is instead a single `torch.Tensor`, something like `torch.tensor([[...], [...], [...]])`. `Rebatch Latents` basically converts that into an actual batch list. For example, when `Rebatch Latents` has `batch_size` set to `3` it does nothing, but when set to `2` it will output `[torch.tensor([[...], [...]]), torch.tensor([[...]])]`, and for `1` the output will be `[torch.tensor([[...]]), torch.tensor([[...]]), torch.tensor([[...]])]`.
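A minimal sketch of that conversion, assuming the latent is a plain tensor whose first dimension is the batch (`torch.split` is a real PyTorch call; the shapes are made up):

```python
import torch

latents = torch.zeros(3, 4, 64, 64)        # Empty Latent Image with batch_size = 3
rebatched = list(torch.split(latents, 2))  # Rebatch Latents with batch_size = 2
print([t.shape[0] for t in rebatched])     # [2, 1]
```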
And therefore the final output is six results (each originating from `Empty Latent Image`), which is expected.
TL;DR: there's no bug; it's just how `Junction` gets executed :)

If you have any more questions, feel free to comment despite the issue being closed.
I have no words - such a nice reply exactly at 00:00 in the UK on NYE! Thank you!
potential bug cache -workflow.json
EDIT - I made a mistake and forgot to rebatch the latents. Now I'm even more confused - I'm sorry :) Still, the caching issue on older versions might be real; it's just that I don't understand how the batchers 'organize' the data internally. Can you maybe make a VERY simple example with 2 seeds, 1 model, 1 latent using the junction batcher? This way people like myself could learn it quicker :)
I took the Junction workflow you provided in the repo. I noticed there were two models next to it but only one connected. I didn't understand (and still don't) why the output was:

instead of the 1/1/1/1/1/1/1 I expected.
I hooked up Beautify to the output and noticed it was outputting TWO clips (in fact, 2 of everything), not one - which makes no sense, as only one model is connected and at that point no batching has occurred yet.
So I took a brand new junction node and reconnected it, and sure enough only ONE clip came out. But the original node, which I kept as a backup, shows 2 clips and therefore behaves incorrectly.
I asked myself "what could possibly cause this" and remembered that first model loader, unconnected. So I suspect that somewhere in the process, you wanted to demo 3 seeds, 2 models, 3 latents for a total of 12 images. You probably also wanted to show you could group by latent to pluck aspect ratios out of the last junction node.
Therefore, it's strongly likely that the first junction node is buggy, caching the previous connection to the former model.
So I went a step further - I added another model to my brand new junction node, and it worked as intended - the node was NOT spitting out 2 concatenated clips. I removed it, saved the workflow, and there it is for your review.
TL;DR: I think the junction node can be buggy, or was buggy and isn't anymore, but the demo workflows have inherited the buggy behaviour.
Let me know if I was on target :) Great job by the way, this is the best node set in the whole world 💯