Currently this only works with ModelScope-based models. Stable LoRA is simply a version that uses Microsoft's official implementation, and is made specifically for Stable Diffusion-based models. While still in preview in its home repository, this release is fully functional.
Please read the open PR on the finetuning repository for details and tracking: https://github.com/ExponentialML/Text-To-Video-Finetuning/pull/90#issue-1795207411
In short, this allows you to use LoRA models on the fly during inference. The only files that are supported are the ones trained in the aforementioned repository.
What you cannot do:
Use LoRA files that were made for SD image models in other trainers.
Weight each LoRA individually (currently they are weighted depending on how many you load, which is also experimental).
Train a LoRA using this extension (yet?)
Set which layers (Linears, Convolutions, etc.) to turn off. This is hard-coded to False on this line, but will become fully featured at a later time. The reason is that I would have to couple the logic of multiple LoRAs together when doing weight merges, and I don't see it as a high-priority feature at the moment (it also requires a good bulk of testing). A rough sketch of what such a weight merge looks like is shown after this list.
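For context on the weight-merge point above, here is a minimal sketch of how multiple LoRAs can be folded into a single Linear weight, with each LoRA's contribution scaled by how many are loaded (matching the count-dependent weighting described earlier). This is illustrative only; the function and variable names are hypothetical and not taken from the extension's code.

```python
import torch

def merge_loras_into_linear(base_weight: torch.Tensor,
                            lora_pairs: list[tuple[torch.Tensor, torch.Tensor]],
                            alpha: float = 1.0) -> torch.Tensor:
    """Fold several LoRAs into one Linear weight (hypothetical sketch).

    base_weight: (out_features, in_features) weight of an nn.Linear.
    lora_pairs:  list of (lora_up, lora_down) matrices with shapes
                 (out_features, rank) and (rank, in_features).
    """
    merged = base_weight.clone()
    if not lora_pairs:
        return merged
    # Each LoRA contributes 1 / N of its delta, so the total update stays
    # bounded no matter how many LoRAs are loaded.
    scale = alpha / len(lora_pairs)
    for lora_up, lora_down in lora_pairs:
        merged += scale * (lora_up @ lora_down)
    return merged

# Example: merge two rank-4 LoRAs into a 320x320 Linear weight.
w = torch.randn(320, 320)
loras = [(torch.randn(320, 4), torch.randn(4, 320)) for _ in range(2)]
w_merged = merge_loras_into_linear(w, loras)
```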
Dev notes:
I've also added an extension helper to this release. It should serve as a simple baseline for building other extensions.
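Purely as a hypothetical illustration of the general shape an extension built on such a helper might take (none of these class or method names come from this release), something along these lines:

```python
# Hypothetical sketch only -- not the actual helper's API from this release.
# The idea: an extension declares a name, does one-time setup, and hooks into
# the generation pipeline.
class ExampleExtension:
    def __init__(self, name: str = "example_extension"):
        self.name = name

    def setup(self, model_dir: str):
        # One-time initialization, e.g. scanning model_dir for files to load.
        self.available_files = []

    def process(self, pipeline, **kwargs):
        # Called around inference; return the (possibly patched) pipeline.
        return pipeline
```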
How to use
Simply place the LoRA files after training in your webui lora models directory. Everything else will be taken care of, and they will show up in the list.
This PR is ready to go, as the training code is already available in the finetuning repository (just pull the PR). For ease of use, non-developers / less code-savvy individuals will need to hold off until the finetuning repository PR is completed.
UPDATE:
The finetune PR is ready to go. I'm just testing for any last bugs before committing.
UPDATE 2:
The finetune PR is now merged into main.