Closed fefespn closed 3 weeks ago
You should just need to move the output model to where you normally put loras for flux and then select it in your UI.
Thanks for your reply. I haven't used Flux before. How do you do it? Via the huggingface library? I tried to search the code for where the model gets loaded but couldn't find it!
I found Forge webui easy; it's similar to A1111 but has a dedicated Flux mode. Then you just go to the LoRA tab and click your LoRA to add it to your prompt for T2I.
One question maybe someone else can answer: is the flux1-dev-bnb-nf4-v2 (quantized, 12 GB) checkpoint OK to use with our trained LoRA, or does it only work with the original flux1-dev?
Thanks, I found the script: it's sd-scripts/flux_minimal_inference.py.
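For anyone else landing here, a sketch of how that script is typically invoked. The flag names below (`--ckpt_path`, `--clip_l`, `--t5xxl`, `--ae`, `--lora_weights`, `--prompt`) and the `path;multiplier` LoRA syntax are assumptions from memory of sd-scripts; confirm with `python flux_minimal_inference.py --help` before running. A small helper that builds the command line:

```python
import shlex

def build_infer_cmd(ckpt, clip_l, t5xxl, ae, lora, prompt, multiplier=1.0):
    """Build a flux_minimal_inference.py command line.

    All flag names are assumptions from memory of sd-scripts;
    check `python flux_minimal_inference.py --help` for the real ones.
    """
    args = [
        "python", "flux_minimal_inference.py",
        "--ckpt_path", ckpt,       # base flux1-dev checkpoint
        "--clip_l", clip_l,        # CLIP-L text encoder
        "--t5xxl", t5xxl,          # T5-XXL text encoder
        "--ae", ae,                # autoencoder
        # sd-scripts (as I recall) accepts "path;multiplier" to scale LoRA strength
        "--lora_weights", f"{lora};{multiplier}",
        "--prompt", prompt,
    ]
    # shlex.join shell-quotes tokens containing spaces or semicolons
    return shlex.join(args)

print(build_infer_cmd(
    "flux1-dev.safetensors", "clip_l.safetensors",
    "t5xxl_fp16.safetensors", "ae.safetensors",
    "my_flux_lora.safetensors", "a photo of a cat",
))
```

Copy the printed command into your shell, substituting your own model and LoRA paths.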
Hi, how much VRAM does this script need?
Hi, I am new to diffusion models, libraries, and LoRAs. How do I run the output LoRA weights with the Flux model after training?