Jonseed opened 1 day ago
You can also save the latent to disk first with the Save Latent node, then safely try different decoding settings by loading it back with the Load Latent node (it looks for the latest latent in your input folder).
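As a rough illustration (plain Python, not the actual ComfyUI node code; file layout and function names are made up for the sketch), "save the latent, then load the latest one from the input folder" might look like:

```python
import glob
import os
import pickle
import tempfile
import time


def save_latent(latent, folder, prefix="latent"):
    # Save the latent dict to a timestamped file so "latest" is well-defined.
    path = os.path.join(folder, f"{prefix}_{time.time_ns()}.pkl")
    with open(path, "wb") as f:
        pickle.dump(latent, f)
    return path


def load_latest_latent(folder, prefix="latent"):
    # Mimic the Load Latent behavior described above: pick the newest
    # matching file in the input folder.
    files = glob.glob(os.path.join(folder, f"{prefix}_*.pkl"))
    if not files:
        raise FileNotFoundError("no saved latents found")
    newest = max(files, key=os.path.getmtime)
    with open(newest, "rb") as f:
        return pickle.load(f)


folder = tempfile.mkdtemp()
save_latent({"samples": [0.1, 0.2]}, folder)
time.sleep(0.01)  # make sure the second save has a later mtime
save_latent({"samples": [0.3, 0.4]}, folder)
print(load_latest_latent(folder)["samples"])
```

The point is just that the sampled latent survives on disk, so an OOM during decode costs you nothing: you reload and retry with different tile settings instead of re-sampling.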
I might also try adding an Unload All Models node between the sampler and the decode, just before the Mochi Decode node, so the VAE has all the VRAM.
Yup, adding the `UnloadAllModels` node after the sampler and before the VAE decode worked for the whole queue, and I didn't get OOM. I'm on a 3060 12GB, btw.
I tried this but it simply doesn't work. I have a 4090 with 24 GB of VRAM, but for the past few days it has been impossible for me to avoid OOM; I don't know what is happening.
I often get OOM before the VAE decode, but if I queue again, with the latents from sampling still in memory, it works. After an OOM, Comfy unloads all models; I think this frees up enough VRAM for the VAE decode. 61 frames works with 9 tiles in VAE decode.
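The "9 tiles" setting caps peak VRAM by decoding the frames in chunks rather than all at once. A minimal sketch of the chunking arithmetic (not the actual Mochi Decode implementation, which also blends overlapping tiles) looks like this:

```python
def split_into_tiles(num_frames, num_tiles):
    # Divide the frame range into num_tiles contiguous chunks; decoding one
    # chunk at a time keeps peak memory at roughly 1/num_tiles of a full
    # decode (plus whatever overlap/blending the real node adds).
    base, extra = divmod(num_frames, num_tiles)
    bounds = []
    start = 0
    for i in range(num_tiles):
        size = base + (1 if i < extra else 0)  # spread the remainder evenly
        bounds.append((start, start + size))
        start += size
    return bounds


# 61 frames in 9 tiles -> seven 7-frame chunks, then two 6-frame chunks.
print(split_into_tiles(61, 9))
```

With 61 frames and 9 tiles, each decode pass only holds 6-7 frames' worth of activations at once, which is why it squeezes under the VRAM limit where a single-pass decode OOMs.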
Here is my workflow: https://gist.github.com/Jonseed/ce98489a981829ddd697fd498e2f3e22