anujnayyar1 closed this issue 2 months ago
Also, just to check: it doesn't seem like consistent self attention is being applied in the flux.1 models. Can you confirm what is happening here?
Yes, the flux model currently does not include consistent self attention. The previous self attention was designed for SDXL, and I have not tested whether it works properly on flux. I am working on this project purely out of personal interest; if you have specific suggestions, I may implement them in my free time, or I may not have the time to do so.
Firstly, a huge thank you for adding the flux.1 models and for this amazing project! We are currently reloading the model between each generation, which allows for multiple characters. This saves VRAM but increases execution time significantly. Would it be possible to add an option to not offload the model between generations?
Also, I would love to sponsor some features and talk to you about a potential collaboration. What is the best way to contact you?