cloneofsimo / lora

Using Low-rank adaptation to quickly fine-tune diffusion models.
https://arxiv.org/abs/2106.09685
Apache License 2.0

Add Lora layer on the fly? #190

Open Marcophono2 opened 1 year ago

Marcophono2 commented 1 year ago

Hello! Great work, thank you very much! I have a general question and was not able to find the answer by googling. From my understanding, LoRA's big advantage is that it does not change the main Stable Diffusion model; I think I am correct so far. On my server the Stable Diffusion model is kept long-term in GPU VRAM, so there is no loading time. Say I have two different trained LoRA layers. Can I apply such a layer to the main model on the fly, or do I have to merge the main model and the LoRA layer first?

I would be working with very many different LoRA layers, each used just once before switching to the next, so reloading would decrease overall performance by about a factor of 6: generating a 768x768 image takes one second, while reloading the main model takes 5 seconds. Also, is 768x768 possible, or only 512x512? 512x512 would be enough for me, but then the difference is a factor of 11: 0.5 seconds for image generation versus 5 seconds for reloading the main model.

Best regards Marc
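(For reference, this repository's `lora_diffusion` module exposes `patch_pipe` and `tune_lora_scale`, which apply LoRA weights to an already-loaded pipeline in place. A minimal sketch of swapping adapters without reloading the base model; the `.safetensors` file names are hypothetical, and it assumes the files were saved by this repo's training scripts:)

```python
import torch
from diffusers import StableDiffusionPipeline
from lora_diffusion import patch_pipe, tune_lora_scale

# Load the base Stable Diffusion model once and keep it resident in VRAM.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Hypothetical per-request loop: patch a different LoRA into the live
# pipeline each time; only the small adapter weights change.
for lora_path in ["./lora_a.safetensors", "./lora_b.safetensors"]:
    patch_pipe(pipe, lora_path, patch_text=True, patch_ti=True, patch_unet=True)
    tune_lora_scale(pipe.unet, 1.0)
    tune_lora_scale(pipe.text_encoder, 1.0)
    # Output resolution is a pipeline argument, not fixed by the LoRA weights.
    image = pipe("a photo in the fine-tuned style",
                 height=768, width=768).images[0]
```

(Re-patching replaces the previously injected LoRA weights, so the 5-second base-model reload should be avoided; only the small adapter file is read per switch.)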

sleep2death commented 1 year ago

Try diffusers' train_dreambooth_lora instead?
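If you go that route, the adapter weights saved by that script can also be swapped into a resident pipeline without reloading the base model. A minimal sketch, assuming output directories produced by diffusers' train_dreambooth_lora (the directory names here are hypothetical):

```python
import torch
from diffusers import StableDiffusionPipeline

# Base model stays loaded in VRAM; only LoRA attention weights are swapped.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the first LoRA's attention processors and generate.
pipe.unet.load_attn_procs("./lora_style_a")
image_a = pipe("a prompt in style A", height=768, width=768).images[0]

# Loading another LoRA replaces the previous attention processors.
pipe.unet.load_attn_procs("./lora_style_b")
image_b = pipe("a prompt in style B", height=768, width=768).images[0]
```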