MountaintopLotus / braintrust

A Dockerized platform for running Stable Diffusion, on AWS (for now)
Apache License 2.0

LoRA #36

Open JohnTigue opened 1 year ago

JohnTigue commented 1 year ago

LoRA (Low-Rank Adaptation) is a second-generation fine-tuning method. One of its goals is more shareable (that is, smaller) models, achieved by training only the delta rather than the full model. This also means the fine-tuning delta can be applied with a strength slider from 0% to 100%. Although LoRA is early in development, it looks very promising.
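The delta-plus-slider idea can be sketched in a few lines. This is a minimal illustration, not any particular library's implementation: the shapes, names, and random weights below are all hypothetical, standing in for one attention projection inside Stable Diffusion.

```python
import numpy as np

# Hypothetical dimensions for a single projection layer.
d_out, d_in, rank = 64, 64, 4

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))         # frozen base weight
A = rng.standard_normal((rank, d_in)) * 0.01   # trained low-rank factor
B = rng.standard_normal((d_out, rank)) * 0.01  # trained low-rank factor

def apply_lora(W, B, A, scale):
    """Merge the low-rank delta into the base weight.

    `scale` plays the role of the 0%-100% slider: 0.0 leaves the
    base model untouched, 1.0 applies the full fine-tuning delta.
    """
    return W + scale * (B @ A)

W_half = apply_lora(W, B, A, 0.5)  # apply the delta at 50% strength
W_full = apply_lora(W, B, A, 1.0)  # full strength
```

Note why the file gets small: only `B` and `A` ship with the LoRA, which is `rank * (d_in + d_out)` numbers instead of `d_out * d_in` for a full weight matrix.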

JohnTigue commented 1 year ago

LoRA is definitely going to make waves:

JohnTigue commented 1 year ago

How to extract a small LoRA file from custom Dreambooth models. Reduce your model sizes!:

Disclaimer: I tried it on only one custom Dreambooth model, and it worked like a charm. If more "style" models and DB models can be extracted, it would be of tremendous value in reducing their file sizes.
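The extraction trick presumably works by subtracting the base weights from the fine-tuned weights and compressing the difference with a truncated SVD. Here is a minimal sketch under that assumption; the shapes and weights are synthetic, and a real tool would loop over every layer of the checkpoint rather than a single matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
d, rank = 64, 4

# Hypothetical weights: a base model and a Dreambooth fine-tune of it.
# The synthetic delta is exactly rank 4 so the recovery is clean.
W_base = rng.standard_normal((d, d))
true_delta = rng.standard_normal((d, rank)) @ rng.standard_normal((rank, d))
W_tuned = W_base + true_delta

def extract_lora(W_base, W_tuned, rank):
    """Approximate the fine-tuning delta with a rank-`rank` factorization."""
    delta = W_tuned - W_base
    U, S, Vt = np.linalg.svd(delta, full_matrices=False)
    B = U[:, :rank] * S[:rank]  # (d_out, rank), singular values folded in
    A = Vt[:rank, :]            # (rank, d_in)
    return B, A

B, A = extract_lora(W_base, W_tuned, rank)
```

Real fine-tuning deltas are not exactly low rank, so a tool like this trades fidelity for file size via the chosen `rank`; that trade-off is probably why results vary across "style" models.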

JohnTigue commented 1 year ago

Olivio: LORA: Install Guide and Super-High Quality Training - with Community Models!!!

JohnTigue commented 1 year ago

MERGE A FACE & STYLE With LORA EXTRACTION In Stable Diffusion! NO TRAINING!

JohnTigue commented 1 year ago

LORA with ControlNET - Get the BEST results - Complete Guide // Jenna Ortega