Trung0246 / ComfyUI-0246

Random nodes for ComfyUI
MIT License

I think the junction node has (OR HAD) a cache problem #16

Closed stephantual closed 6 months ago

stephantual commented 6 months ago

potential bug cache -workflow.json

EDIT - I made a mistake and forgot to rebatch the latents. Now I'm even more confused - I'm sorry :) Still, the caching issue on older versions might be real; it's just that I don't understand how the batchers 'organize' the data internally. Could you maybe make a VERY simple example with 2 seeds, 1 model, and 1 latent using the junction batcher? That way people like myself could learn it quicker :)

I took the Junction workflow you provided in the repo. I noticed there were two models next to it but only one connected. I didn't understand (and still don't) why the output was:

instead of the 1/1/1/1/1/1/1 I expected.

I hooked up Beautify to the output and noticed it was outputting TWO clips (in fact, 2 of everything). Not one - which makes no sense, as only one model is connected and at that point no batching has occurred yet.

So I took a brand new junction node and reconnected it, and sure enough only ONE clip came out. But the original node, which I kept as a backup, shows 2 clips and therefore behaves incorrectly.

I asked myself "what could possibly cause this" and remembered that first model loader, unconnected. So I suspect that somewhere in the process, you wanted to demo 3 seeds, 2 models, 3 latents for a total of 12 images. You probably also wanted to show you could group by latent to pluck aspect ratios out of the last junction node.

Therefore, it's highly likely that the first junction node is buggy, caching the previous connection to the former model.

So I went a step further - I added another model to my brand new junction node, and it worked as intended - the node was NOT spitting out 2 concatenated clips. I removed it, saved the workflow, and there it is for your review.

TLDR: I think the junction node can be buggy, or was buggy and isn't anymore, but the demo workflows have inherited the buggy behaviour.

Let me know if I was on target :) Great job by the way, this is the best node set in the whole world 💯

Trung0246 commented 6 months ago

Hm, how to explain this? But first, let's start with this:

image

As you can see, 276 connects directly to Empty Latent Image, which executes once, while 134 connects to Rebatch Latents instead, which executes twice.

Back to the workflow. The only point of interest here is that 134 has its inputs hooked to Rebatch Latents nodes, all of which have batch_size set to 2, which means 134 executes twice and hence outputs this:

List of 2:
  Dict of 21:
    Key (tuple): ('type', 'INT')
    Type: str
    Key (tuple): ('data', 'INT', 0)
    Type: int
    Key (tuple): ('index', 'INT')
    Type: int
    ...
  Dict of 21:
    Key (tuple): ('type', 'INT')
    Type: str
    Key (tuple): ('data', 'INT', 0)
    Type: int
    Key (tuple): ('index', 'INT')
    Type: int
    ...

Which is exactly what we would expect:

image

This means there are now 2 different Junction objects in the same batch, both originating from the same 134 node, and each of them holds exactly one CLIP, since only a single CLIP was provided. When ComfyUI detects this case, the algorithm would then execute Beautify, but since Beautify has INPUT_IS_LIST set to True, it will only execute Beautify once and just give it the batch of two CLIPs, [CLIP, CLIP], directly.
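
To make INPUT_IS_LIST concrete, here is a minimal sketch of a ComfyUI-style node that behaves like Beautify in this regard (the class and function names are hypothetical, not the actual Beautify source): with INPUT_IS_LIST set to True, ComfyUI calls the node once and every input arrives as a Python list.

```python
# Hypothetical node for illustration only, not the real Beautify source.
class InspectList:
    # Tell ComfyUI to pass inputs as whole lists instead of per-element.
    INPUT_IS_LIST = True

    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {"clip": ("CLIP",)}}

    RETURN_TYPES = ("STRING",)
    FUNCTION = "inspect"
    CATEGORY = "debug"

    def inspect(self, clip):
        # With INPUT_IS_LIST = True, `clip` is a list here,
        # e.g. [CLIP, CLIP] for the batch of two Junction outputs above.
        return (f"received {len(clip)} CLIP object(s)",)
```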

For the case of 276, it behaves as expected, since there's only a single CLIP output and the junction algorithm will give exactly just that. If you want to get the second CLIP, then you have to hook another node that accepts a CLIP input to the 276 output.


Another thing is that Junction has INPUT_IS_LIST set to False while JunctionBatch has it set to True. Explaining this fully would make the comment too long, but hopefully after you read how ComfyUI executes nodes, you will understand how JunctionBatch behaves differently in pluck and batch mode.
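
As a rough sketch of what that difference means during execution (a simplified, hypothetical model of ComfyUI's list handling, not the real executor): when INPUT_IS_LIST is False, the executor maps the node over its list inputs, calling it once per element; when True, it calls the node a single time with the lists intact.

```python
# Simplified, hypothetical model of ComfyUI's list handling.
def run_node(fn, inputs: dict, input_is_list: bool) -> list:
    if input_is_list:
        # One call; every input stays a full list (JunctionBatch, Beautify).
        return [fn(**inputs)]
    # One call per element; shorter lists reuse their last value (Junction).
    n = max(len(v) for v in inputs.values())
    return [fn(**{k: v[i] if i < len(v) else v[-1] for k, v in inputs.items()})
            for i in range(n)]
```

So the batch of two Junction objects from 134 makes a downstream node with INPUT_IS_LIST set to False execute twice, while Beautify executes only once.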


As for Rebatch Latents itself, the Empty Latent Image has batch_size set to 3, but internally that is not actually a list batch. It is instead a single torch.Tensor shaped like torch.tensor([[...], [...], [...]]). Rebatch Latents basically converts that into an actual batch list.

For example, if Rebatch Latents has batch_size set to 3, it does nothing, but when set to 2 it will output [torch.tensor([[...], [...]]), torch.tensor([[...]])], and therefore for 1 it will be [torch.tensor([[...]]), torch.tensor([[...]]), torch.tensor([[...]])].
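
In torch terms that conversion is essentially a split along the batch dimension; a minimal sketch (the shapes are illustrative, not taken from the workflow):

```python
import torch

# A batch_size=3 latent is a single tensor with batch dimension 3,
# e.g. shape [3, 4, 64, 64] for three 512x512 latents.
latents = torch.zeros(3, 4, 64, 64)

# Rebatching to batch_size=2 splits it into chunks of at most 2 along dim 0.
chunks = list(torch.split(latents, 2, dim=0))
print([tuple(c.shape) for c in chunks])  # [(2, 4, 64, 64), (1, 4, 64, 64)]

# Rebatching to batch_size=1 yields three single-item tensors.
singles = list(torch.split(latents, 1, dim=0))
print(len(singles))  # 3
```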

And therefore the final output is:

Which is expected.


TL;DR: there's no bug, it's just how Junction is executed :)

Trung0246 commented 6 months ago

If you have any more questions, feel free to comment despite the issue being closed.

stephantual commented 6 months ago

I have no words - such a nice reply exactly at 00:00 in the UK on NYE! Thank you!