invoke-ai / InvokeAI

Invoke is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry-leading WebUI and serves as the foundation for multiple commercial products.
https://invoke-ai.github.io/InvokeAI/
Apache License 2.0

[enhancement]: Please support FLUX.1 #6712

Closed DavidCWGA closed 1 month ago

DavidCWGA commented 3 months ago

Is there an existing issue for this?

Contact Details

david@gloveraoki.net

What should this feature add?

Please support the new open-weights models from Black Forest Labs, FLUX.1.

https://huggingface.co/black-forest-labs

Alternatives

No response

Additional Content

No response

JorgeR81 commented 2 months ago

Below are additional details on which model to use based on your system:

  • FLUX dev quantized starter model: non-commercial, >16 GB RAM, ≥12 GB VRAM
  • FLUX schnell quantized starter model: commercial, faster inference than dev, >16 GB RAM, ≥12 GB VRAM

Are these minimum requirements, or recommendations for better performance?
Could it work with 10 GB or 8 GB of VRAM?

damiano1996 commented 2 months ago

Hi @JorgeR81, by enabling CPU offloading I'm able to run FLUX.1-dev on my laptop with an Nvidia RTX 500 Ada Generation (4094 MiB); it takes around 260 seconds to generate a 1024x1024 image. But currently I'm not able to run it with InvokeAI, since (to the best of my knowledge) CPU offloading is not configurable there. It would be awesome to make it configurable.
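For reference, the CPU offloading described above can be done outside InvokeAI with the Hugging Face diffusers library. This is a minimal sketch, not InvokeAI code: the `choose_offload` helper and its VRAM thresholds are purely illustrative assumptions, while `enable_model_cpu_offload()` and `enable_sequential_cpu_offload()` are real diffusers pipeline methods.

```python
# Sketch of running FLUX.1-dev with CPU offloading via diffusers (not InvokeAI).
# The thresholds in choose_offload() are illustrative guesses, not official
# requirements from Invoke or Black Forest Labs.

def choose_offload(vram_gb: float) -> str:
    """Pick an offload strategy from available VRAM (hypothetical helper)."""
    if vram_gb >= 24:
        return "none"        # whole pipeline fits on the GPU
    if vram_gb >= 12:
        return "model"       # swap whole sub-models between CPU and GPU
    return "sequential"      # offload layer by layer: slowest, lowest VRAM


def run_flux(prompt: str, vram_gb: float):
    """Generate one 1024x1024 image with FLUX.1-dev, offloading as needed."""
    import torch
    from diffusers import FluxPipeline  # requires a recent diffusers release

    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
    )
    strategy = choose_offload(vram_gb)
    if strategy == "model":
        pipe.enable_model_cpu_offload()
    elif strategy == "sequential":
        pipe.enable_sequential_cpu_offload()
    else:
        pipe.to("cuda")
    return pipe(prompt, height=1024, width=1024).images[0]
```

On a 4 GB card like the RTX 500 above, this would fall through to sequential offloading, which matches the very slow but working generation times reported.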

TheBarret commented 2 months ago

I'm fairly sure there are PRs underway to get Flux working, e.g. #6739 - it just seems they're not ready yet.

When you compare InvokeAI to some of the other tools out there that have implemented it, InvokeAI is more of a product: it's large and generally held to high standards. That's quite different from something like Comfy, which is essentially a lot of different people's scripts and packages that can be glued together in various ways. People can make great things thanks to that flexibility, but it's also a lot harder to use and much less refined.

I'm sure that if it were as simple as supporting the model, Invoke would already have FLUX. While I have no insider knowledge, I'm assuming it's included as part of a larger piece of work with the intent of a better-integrated outcome. (I'm just guessing here.)

"Glued together" is one way of saying it's unprofessional. Geesh, lol.

psychedelicious commented 1 month ago

Invoke v5 supports FLUX. We are working to support more of the FLUX model ecosystem. Since the initial support is in, I'm closing this issue.