LoRA-Composer: Leveraging Low-Rank Adaptation for Multi-Concept Customization in Training-Free Diffusion Models
Yang Yang, Wen Wang, Liang Peng, Chaotian Song, Yao Chen, Hengjia Li, Xiaolong Yang, Qinglin Lu, Deng Cai, Wei Liu, Boxi Wu
```bash
conda create -n loracomposer python=3.10 -y
conda activate loracomposer
# install PyTorch
conda install pytorch==1.12.0 torchvision==0.13.0 torchaudio==0.12.0 cudatoolkit=11.3 -c pytorch -y

git clone https://github.com/Young98CN/LoRA_Composer.git
cd LoRA_Composer
pip install .

# Install diffusers==0.14.0 with T2I-Adapter support (install from source)
git clone -b T2IAdapter-for-mixofshow https://github.com/guyuchao/diffusers-t2i-adapter.git
cd diffusers-t2i-adapter
pip install .
```
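After both installs, a quick sanity check can confirm that the pinned versions are actually the ones importable in the environment. This small helper is not part of the repo; the pins below simply mirror the install commands above.

```python
import importlib.metadata

def check_pins(pins):
    """Compare installed package versions against expected pins.

    Returns a dict of mismatches: {name: (installed_or_None, expected)}.
    An empty dict means every pin matches.
    """
    mismatches = {}
    for name, expected in pins.items():
        try:
            installed = importlib.metadata.version(name)
        except importlib.metadata.PackageNotFoundError:
            installed = None  # package not installed at all
        if installed != expected:
            mismatches[name] = (installed, expected)
    return mismatches

if __name__ == "__main__":
    # Versions pinned by the install steps above.
    print(check_pins({"torch": "1.12.0", "torchvision": "0.13.0", "diffusers": "0.14.0"}))
```

An empty result means the environment matches the README's pins; anything else lists what to reinstall.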
We adopt ChilloutMix for real-world concepts and Anything-v4 for anime concepts.
```bash
cd experiments/pretrained_models
# Diffusers-version ChilloutMix
git lfs clone https://huggingface.co/windwhinny/chilloutmix.git
# Diffusers-version Anything-v4
git lfs clone https://huggingface.co/xyn-ai/anything-v4.0.git --exclude="anything*, Anything*, example*, *.safetensors"

mkdir t2i_adapter
cd t2i_adapter
# sketch/openpose adapters of T2I-Adapter
wget https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models/t2iadapter_sketch_sd14v1.pth
wget https://huggingface.co/TencentARC/T2I-Adapter/resolve/main/models/t2iadapter_openpose_sd14v1.pth
```
To quickly reproduce our method, we provide the following resources used in the paper.
| Paper Resources | Concept Datasets | Single-Concept ED-LoRAs |
| --- | --- | --- |
| Download Link | Google Drive | Google Drive |
After downloading, the path should be arranged as follows:
```
LoRA_Composer
├── mixofshow
├── scripts
├── options
├── experiments
│   ├── composed_edlora                # composed ED-LoRAs
│   └── pretrained_models
│       ├── anything-v4.0
│       ├── chilloutmix
│       └── t2i_adapter/t2iadapter_*_sd14v1.pth
├── datasets
│   ├── data                           # ** Put the datasets in here **
│   │   ├── characters/
│   │   ├── objects/
│   │   └── scenes/
│   └── data_cfgs/lora_composer
│       ├── single-concept             # single-concept ED-LoRA training configs
│       └── region_lora                # ED-LoRA merging configs
├── loras                              # ** Put the single-concept ED-LoRAs in here **
│   ├── anime
│   ├── background
│   └── real
...
```
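As a convenience, a few lines of Python can report which of the expected folders above are still missing under the repo root. This helper is a sketch, not part of the repo; the paths are taken directly from the layout shown above.

```python
from pathlib import Path

# Expected sub-folders, copied from the layout above.
EXPECTED = [
    "experiments/composed_edlora",
    "experiments/pretrained_models/anything-v4.0",
    "experiments/pretrained_models/chilloutmix",
    "experiments/pretrained_models/t2i_adapter",
    "datasets/data",
    "datasets/data_cfgs/lora_composer",
    "loras",
]

def missing_paths(root="."):
    """Return the expected sub-folders that do not exist under `root`."""
    root = Path(root)
    return [p for p in EXPECTED if not (root / p).is_dir()]

if __name__ == "__main__":
    for p in missing_paths():
        print(f"missing: {p}")
```

Run it from the repo root after downloading; no output means everything is in place.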
```bash
# in the root folder of the repo
find scripts/lora_composer_scripts/merge_EDLoRA -type f -name "*.sh" -exec echo "Executing {}" \; -exec bash {} \;
python scripts/lora_composer_scripts/merge_EDLoRA/link_lora2folder.py
```
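The `find` one-liner above discovers and executes every merge script in turn; the same loop can be sketched in Python if you prefer to log or filter scripts before running them. This is a convenience sketch, not part of the repo; the directory name comes from the command above.

```python
import subprocess
from pathlib import Path

# Directory taken from the find command above.
MERGE_DIR = "scripts/lora_composer_scripts/merge_EDLoRA"

def merge_scripts(root=MERGE_DIR):
    """List every merge script under `root`, sorted for a stable run order."""
    return sorted(Path(root).rglob("*.sh"))

def run_all(root=MERGE_DIR):
    """Run each discovered script with bash, mirroring the find -exec loop."""
    for script in merge_scripts(root):
        print(f"Executing {script}")
        subprocess.run(["bash", str(script)], check=True)
```

`check=True` stops at the first failing script, whereas the `find` version keeps going; pick whichever behavior you want.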
1. Partial results from the paper can be obtained by running the following commands (**the `--image_guidance` option activates conditions such as pose and sketch**):
```bash
bash scripts/lora_composer_scripts/paper_result_scripts/lora_composer_anime.sh
bash scripts/lora_composer_scripts/paper_result_scripts/lora_composer_real.sh

# Checkpoints from the newer release should first be converted to the LoRA checkpoint format
python scripts/lora_composer_scripts/convert_old_EDLoRA.py ${ckpt_path} ${save_path}
bash scripts/lora_composer_scripts/merge_EDLoRA/anime/merge_ai.sh
```
## License and Acknowledgement
This project is released under the [Apache 2.0 license](LICENSE).<br>
This codebase builds on [Mix-of-Show](https://github.com/TencentARC/Mix-of-Show/tree/research_branch). Thanks for open-sourcing! We also acknowledge the following great open-source project:
- T2I-Adapter (https://github.com/TencentARC/T2I-Adapter).
## Citation
```bibtex
@article{yang2024loracomposer,
  title   = {LoRA-Composer: Leveraging Low-Rank Adaptation for Multi-Concept Customization in Training-Free Diffusion Models},
  author  = {Yang Yang and Wen Wang and Liang Peng and Chaotian Song and Yao Chen and Hengjia Li and Xiaolong Yang and Qinglin Lu and Deng Cai and Boxi Wu and Wei Liu},
  year    = {2024},
  journal = {arXiv preprint arXiv:2403.11627}
}
```
If you have any questions or suggestions, please email Yang Yang (yangyang98@zju.edu.cn) or open an issue.