homoluden opened 2 months ago
If you want to save VRAM anyway, wouldn't it be better to use a model quantized with bitsandbytes, like this one?
https://huggingface.co/lllyasviel/flux1-dev-bnb-nf4/blob/main/flux1-dev-bnb-nf4-v2.safetensors
Jesus, just check the response rate. Isn't it obvious that this is a dead horse?
Omost may be a frozen project, but it is a perfect fit for Flux, because of how well Flux handles the complex prompts Omost generates. @lllyasviel thanks for your amazing work, it would be great to see something like Omost with Flux.
There is a promising new model: https://huggingface.co/docs/diffusers/main/en/api/pipelines/flux
It would be good to take this model's pipeline from the diffusers package and, if possible, add the canvas and attention/transformer additions from StableDiffusionXLOmostPipeline.
Also, note that there is a reduced T5 model available to save some VRAM: https://huggingface.co/comfyanonymous/flux_text_encoders/tree/main