Closed derpina-ai closed 4 weeks ago
What, Flux? Being full fp16 doesn't mean T5/CLIP/VAE are included. They have to be loaded from somewhere, either baked into the checkpoint or from external files, in any case and in any UI.
Running this combination on 24GB of VRAM should be possible; I recall running full fp16 Flux with full fp16 T5 on older commits as well. Try setting the weights slider to about 23500MB and make sure there's about 40GB or more of virtual memory configured (I didn't actually see any writes to it in practice, but the allocation does bloat up).
Yes, Flux. I think I get it now: some fp8 checkpoints have T5/CLIP/VAE baked in, but none of the fp16 models include them.
Hi, is this intended behavior for Forge UI? People report that in ComfyUI a full model only requires the fp16 model file itself. In Forge, most fp16 models I have (~22GB each) still ask for separate T5/Clip/Vae files, which makes them impossible for me to run: trying to fit everything in twice pushes me over 24GB of VRAM.
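For context, a rough back-of-the-envelope budget shows why the full-precision stack doesn't fit in 24GB without offloading. Only the ~22GB checkpoint figure comes from this thread; the T5/CLIP/VAE sizes are my approximations and may differ from your files:

```python
# Rough VRAM budget for running Flux at full fp16 with external
# text encoders and VAE. Component sizes in GB; the 22 GB figure
# is from the report above, the rest are approximate fp16 sizes.
components_fp16 = {
    "flux_checkpoint": 22.0,   # full fp16 Flux model (per the report)
    "t5_xxl":           9.5,   # approximate fp16 T5-XXL text encoder
    "clip_l":           0.25,  # approximate fp16 CLIP-L text encoder
    "vae":              0.17,  # approximate fp16 VAE
}

total = sum(components_fp16.values())
vram = 24.0

print(f"total needed: {total:.2f} GB, VRAM available: {vram} GB")
if total > vram:
    print(f"over budget by {total - vram:.2f} GB; "
          "the remainder must sit in system RAM / the pagefile")
```

This is why the weights-slider setting matters: capping GPU-resident weights at ~23500MB forces the overflow into system memory instead of crashing with an out-of-memory error.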