WASasquatch / was-node-suite-comfyui

An extensive node suite for ComfyUI with over 210 new nodes
MIT License
1.15k stars · 170 forks

Advanced Sampler for Latent Composition? #24

Closed cochese9000 closed 1 year ago

cochese9000 commented 1 year ago

Hi, great work on this! I barely understand how hard this is, but I'm impressed nonetheless. Also, please forgive the question:

I tested the 'Noisy Latent Composition' example on the ComfyUI site, which seems to work quite well. But I was trying something similar with a specific seed (from a seed node) with your KSampler (KSamplerWAS), and I realised it doesn't have a 'start at X step' feature like the advanced sampler in the Comfy node list.

I'm wondering if I'm doing something foolish or it just doesn't have a use-case for others yet?

FWIW, I was trying to see if I could use the same seed for a 'continuation' so there was more coherence between the two stages. But I'm not sure if that actually makes sense. I realise I could copy-paste a seed, but this was for an automated process, and the 'stop/start' steps seem important.

So,

  1. Is this pointless?
  2. If it isn't, is there any intended feature that would support this, i.e. a 'start at step' / 'stop at step' feature?

Thanks!

WASasquatch commented 1 year ago

ComfyUI now allows you to right-click the seed, or any other field, and convert it into an input. So you can already do that with the advanced sampler, and then use a Primitive or Random Number + Number to Int node to connect to them. So the seed node and WAS sampler are actually no longer needed.

And I am not sure about stopping and starting at specific steps. I want to say that's not implemented yet. I think the issue is that ComfyUI is linear. Once a KSampler has run, it's done and closed. All it returns is a result, and when it runs again, it's as if it's running fresh for the first time.
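For intuition, here is a toy sketch (not ComfyUI's actual sampler code; `denoise_range` and the update rule are made up for illustration) of why a "start at step" feature can work even though each sampler run starts fresh: if both passes share the same seed and the same total-step schedule, and the second pass receives the first pass's latent, resuming at step 10 reproduces the tail of a single 20-step run.

```python
import numpy as np

def denoise_range(latent, seed, total_steps, start_step, end_step):
    """Toy stand-in for an 'advanced' sampler: walk a fixed schedule
    from start_step to end_step. The per-step noise is pre-drawn from
    the seed, so it depends only on the seed and step index, not on
    where we resume."""
    rng = np.random.default_rng(seed)
    noise = [rng.standard_normal(latent.shape) for _ in range(total_steps)]
    x = latent.copy()
    for i in range(start_step, end_step):
        x = 0.9 * x + 0.1 * noise[i]  # placeholder "denoise" update
    return x

seed, steps = 42, 20
start = np.zeros(4)
# One pass over all 20 steps...
full = denoise_range(start, seed, steps, 0, steps)
# ...matches pass 1 (steps 0-10) followed by pass 2 resuming at step 10,
# provided the latent is handed from the first pass to the second.
half = denoise_range(start, seed, steps, 0, 10)
resumed = denoise_range(half, seed, steps, 10, steps)
assert np.allclose(full, resumed)
```

The key point is that the resumed run must be planned as part of the same schedule; a fresh run with a different total step count would follow a different path.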

cochese9000 commented 1 year ago

Wow, that's so cool! I didn't know that. Thanks!

I think I see what you're saying, but it looks like the ComfyUI example might be confusing in that regard. Or maybe it was the suggestion of one of the videos I watched on it. Hmm. The example seems to say 'run this for X steps, then add this and continue'. So it gives it some coherence, having the same basis (seed). I clearly need to understand more.

WASasquatch commented 1 year ago

> Wow, that's so cool! I didn't know that. Thanks!
>
> I think I see what you're saying but it looks like the comfyui example might be confusing in that regard. Or maybe it was the suggestion of one of the videos I watched on it. hmm. The example seems to say 'run this for X steps, then add this and continue'. So, it gives it some coherence, having the same basis (seed). I clearly need to understand more.

That shouldn't work. When you set your steps, they are calculated against the 1000 total training steps, so a 10-step run is a totally different result than a 20-step image would be. They'd all need the same total steps to be working toward the same result, at least as far as I know about diffusion. That's why the example uses special code and shares the latent and traverses the seed.
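A small sketch of that point (assuming a simple evenly spaced schedule over the training range; real schedulers vary in the details): the first 10 timesteps of a 20-step schedule are not the same as a 10-step schedule, so "run 10 steps, then continue" only lines up if both passes were planned as halves of the same 20-step schedule.

```python
import numpy as np

def timestep_schedule(steps, train_steps=1000):
    # Evenly spaced timesteps drawn from the training range,
    # as many samplers roughly do (exact spacing varies).
    return np.linspace(train_steps - 1, 0, steps).round().astype(int)

ten = timestep_schedule(10)     # [999 888 777 666 555 444 333 222 111 0]
twenty = timestep_schedule(20)  # starts 999, 946, ... with smaller gaps
# The first 10 entries of the 20-step schedule differ from the
# 10-step schedule, so the two runs denoise along different paths.
assert not np.array_equal(ten, twenty[:10])
```

This is why completing a 10-step image and then "continuing" it with a fresh sampler is not equivalent to one 20-step run.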