By the way, don't try to replicate this yourself unless you have 64 GB RAM ( or preferably 128 GB ).
I only have 32 GB RAM, and it put quite a strain on my system.
My page file grew by over 40 GB, the system slowed down, and it took several minutes to cancel the queue in ComfyUI.
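If anyone wants to keep an eye on memory pressure while reproducing, a small helper like the one below can log RAM and page-file usage every few seconds ( just a sketch; it assumes the psutil package is installed ):

```python
# Minimal memory monitor to run alongside ComfyUI while reproducing.
# Sketch only; requires the psutil package (pip install psutil).
import time

import psutil


def log_memory(interval_s: float = 5.0) -> None:
    """Print RAM and swap (page file) usage every `interval_s` seconds. Stop with Ctrl+C."""
    while True:
        vm = psutil.virtual_memory()
        sw = psutil.swap_memory()
        print(
            f"RAM used: {vm.used / 2**30:.1f} GiB / {vm.total / 2**30:.1f} GiB | "
            f"swap used: {sw.used / 2**30:.1f} GiB"
        )
        time.sleep(interval_s)


if __name__ == "__main__":
    log_memory()
```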
I just realized I was using --force-fp32 at the time.
That's the reason for the system slowdown while loading the UNET model in fp16 mode.
Expected Behavior
The Flux UNET fp8 model is unloaded from RAM after switching to fp16 in the node settings.
Actual Behavior
Both models are kept in RAM ( or the page file, in my case ), and generation slows to a crawl.
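From the outside, it looks like the loader cache keeps every ( model, dtype ) variant it has ever loaded instead of evicting the old one. The sketch below is purely illustrative ( not ComfyUI's actual code ) and only shows the difference between the observed and the expected caching behavior:

```python
# Purely illustrative, not ComfyUI code: a loader cache keyed by (path, dtype)
# that never evicts the previously loaded variant of the same model.

_loaded: dict[tuple[str, str], bytearray] = {}


def _read_weights(path: str, dtype: str) -> bytearray:
    # Stand-in for reading the checkpoint from disk; a real UNET would
    # allocate many GB here, one copy per dtype.
    return bytearray(8)


def load_unet_observed(path: str, dtype: str) -> bytearray:
    # Observed behavior: switching from fp8 to fp16 adds a second cache
    # entry, so both copies of the weights stay resident.
    key = (path, dtype)
    if key not in _loaded:
        _loaded[key] = _read_weights(path, dtype)
    return _loaded[key]


def load_unet_expected(path: str, dtype: str) -> bytearray:
    # Expected behavior: evict any other dtype variant of the same model
    # before loading the requested one.
    for p, d in list(_loaded):
        if p == path and d != dtype:
            del _loaded[(p, d)]
    return load_unet_observed(path, dtype)
```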
Steps to Reproduce
Generate with the Flux UNET model loaded in fp8, then switch the weight dtype back to fp16 ( default ) in the Loader node and generate again.
Debug Logs
Other
I'm on Windows 10
I don't know exactly which commit caused this, because I've been using the checkpoint version lately, but I remember that I had no issues with this before.