kijai / ComfyUI-MochiWrapper

Apache License 2.0

Try unloading all models before VAE if OOM #73

Open · Jonseed opened this issue 1 day ago

Jonseed commented 1 day ago

I often get OOM before the VAE decode, but if I queue again with the latents from sampling still in memory, it works. After an OOM, Comfy unloads all models, and I think that clears up enough VRAM for the VAE decode. 61 frames works with 9 tiles in the VAE decode.

Here is my workflow: https://gist.github.com/Jonseed/ce98489a981829ddd697fd498e2f3e22
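
For illustration, the effect described above can be reproduced in plain PyTorch: once a model is no longer needed, moving it off the GPU and clearing the CUDA cache frees the VRAM that the VAE decode needs. This is a minimal sketch, not MochiWrapper code, and the helper name is made up.

```python
import gc
import torch

def offload_and_clear(model: torch.nn.Module) -> None:
    """Hypothetical helper: free VRAM held by a model that is no longer needed."""
    model.to("cpu")            # move the weights out of VRAM
    gc.collect()               # drop lingering Python references to GPU tensors
    torch.cuda.empty_cache()   # return cached CUDA blocks to the driver
```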

kijai commented 1 day ago

You can also save the latents to disk first with the Save Latent node, then you can try different decoding settings safely by loading them with Load Latent (it looks for the latest latent in your input folder).
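
As a rough sketch of the same idea outside the node graph, sampled latents can be dumped with torch.save and the most recent file reloaded later. The folder, file name, and dict key below are assumptions; the actual Save Latent / Load Latent nodes may use a different on-disk format.

```python
import glob
import os
import torch

def save_latent(latent: torch.Tensor, folder: str = "input") -> str:
    """Write sampled latents to disk so decoding can be retried later."""
    os.makedirs(folder, exist_ok=True)
    path = os.path.join(folder, "mochi_latent.pt")
    torch.save({"samples": latent.cpu()}, path)
    return path

def load_latest_latent(folder: str = "input") -> torch.Tensor:
    """Load the most recently saved latent file from the folder."""
    paths = glob.glob(os.path.join(folder, "*.pt"))
    latest = max(paths, key=os.path.getmtime)  # raises if no files exist
    return torch.load(latest)["samples"]
```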

Jonseed commented 1 day ago

I might also try adding an unload-all-models node between the sampler and the decode, just before the Mochi Decode node, so the VAE has all the VRAM.

Jonseed commented 1 day ago

Yup, adding the UnloadAllModels node after the sampler and before the VAE decode worked for the whole queue, and I didn't get OOM. I'm on a 3060 12GB, btw.
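
For reference, a pass-through custom node that frees VRAM at that point in the graph might look roughly like the sketch below, assuming ComfyUI's comfy.model_management helpers (unload_all_models, soft_empty_cache). The class name is hypothetical; this is not the stock UnloadAllModels node.

```python
# Sketch of a pass-through node placed between the sampler and Mochi Decode.
import comfy.model_management as mm

class FreeVRAMPassthrough:
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {"samples": ("LATENT",)}}

    RETURN_TYPES = ("LATENT",)
    FUNCTION = "run"
    CATEGORY = "utils"

    def run(self, samples):
        mm.unload_all_models()   # push loaded models out of VRAM
        mm.soft_empty_cache()    # clear PyTorch's CUDA cache
        return (samples,)        # latents pass straight through to the decoder
```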

reydeljuego12345 commented 11 hours ago

> Yup, adding the UnloadAllModels node after the sampler and before the VAE decode worked for the whole queue, and I didn't get OOM. I'm on a 3060 12GB, btw.

I tried this, but it simply doesn't work. I have a 4090 with 24 GB of VRAM, but for the past few days it has been impossible for me to avoid OOM. I don't know what is happening.