mLoRA
An Efficient "Factory" to Build Multiple LoRA Adapters
mLoRA (a.k.a. Multi-LoRA Fine-Tune) is an open-source framework designed for efficient fine-tuning of multiple Large Language Models (LLMs) using LoRA and its variants. Key features of mLoRA include:
Concurrent fine-tuning of multiple LoRA adapters.
Shared base model among multiple LoRA adapters.
Efficient pipeline parallelism algorithm.
Support for multiple LoRA variant algorithms and various base models.
Support for multiple reinforcement learning preference alignment algorithms.
The end-to-end architecture of mLoRA is shown in the figure below.
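To make the shared-base-model idea concrete, here is a minimal PyTorch sketch (illustrative only, not mLoRA's actual implementation; the class name MultiLoRALinear is made up for this example). It shows how several LoRA adapters can attach independent low-rank weights to one frozen base layer, so a single batched forward pass through the base model serves samples from all adapters at once:
# Illustrative sketch only -- not mLoRA's actual code. Several LoRA adapters
# share one frozen base layer; one batched forward pass serves all of them.
import torch
import torch.nn as nn

class MultiLoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, num_adapters: int, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base                      # shared base weights
        for p in self.base.parameters():
            p.requires_grad_(False)           # the base model stays frozen
        self.scaling = alpha / r
        # one independent low-rank (A, B) pair per adapter
        self.lora_a = nn.ParameterList(
            [nn.Parameter(torch.randn(r, base.in_features) * 0.01)
             for _ in range(num_adapters)])
        self.lora_b = nn.ParameterList(
            [nn.Parameter(torch.zeros(base.out_features, r))
             for _ in range(num_adapters)])

    def forward(self, x, adapter_ids):
        # x: (batch, in_features); adapter_ids: (batch,) adapter index per row
        y = self.base(x)                      # one base forward for the whole batch
        delta = torch.zeros_like(y)
        for i in range(len(self.lora_a)):
            mask = adapter_ids == i
            if mask.any():                    # add each adapter's low-rank update to its rows
                delta[mask] = (x[mask] @ self.lora_a[i].T @ self.lora_b[i].T) * self.scaling
        return y + delta

# toy usage: rows 0-1 belong to adapter 0, rows 2-3 to adapter 1
layer = MultiLoRALinear(nn.Linear(64, 64), num_adapters=2)
out = layer(torch.randn(4, 64), torch.tensor([0, 0, 1, 1]))
Because the base weights are stored and executed only once, adding another adapter mostly adds only its small low-rank matrices.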
First, clone this repository and install the dependencies (or use our Docker image):
# Clone Repository
git clone https://github.com/TUDB-Labs/mLoRA
cd mLoRA
# Install requirements (requires Python >= 3.12)
pip install .
The mlora_train.py script is a starting point for batch fine-tuning LoRA adapters.
python mlora_train.py \
--base_model TinyLlama/TinyLlama-1.1B-Chat-v0.4 \
--config demo/lora/lora_case_1.yaml
You can find the adapters' configurations in the demo folder, including examples that use different LoRA variants and reinforcement learning preference alignment algorithms.
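If you want to inspect one of these demo configurations before launching a run, a quick way is to load it with PyYAML (assumed to be available in your environment; pip install pyyaml if it is not) and print its top-level sections:
# Peek at a demo adapter configuration (run from the mLoRA repository root)
import yaml

with open("demo/lora/lora_case_1.yaml") as f:
    config = yaml.safe_load(f)

print(list(config.keys()))  # top-level sections defined by this demo config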
For further usage details, use the --help option:
python mlora_train.py --help
Similar to the Quickstart, the commands to start training in a two-node environment are as follows:
NOTE 1: Use the environment variables MASTER_ADDR/MASTER_PORT to set the master node.
NOTE 2: Set balance to specify the number of decoder layers allocated to each rank (a small illustration of this mapping follows the two commands below).
# in the first node
export MASTER_ADDR=master.svc.cluster.local
export MASTER_PORT=12355
python mlora_pp_train.py \
--base_model TinyLlama/TinyLlama-1.1B-Chat-v0.4 \
--config demo/lora/lora_case_1.yaml \
--pipeline \
--device "cuda:0" \
--rank 0 \
--balance 12 13 \
--no-recompute \
--precision fp32
# in the second node
export MASTER_ADDR=master.svc.cluster.local
export MASTER_PORT=12355
python mlora_pp_train.py \
--base_model TinyLlama/TinyLlama-1.1B-Chat-v0.4 \
--config demo/lora/lora_case_1.yaml \
--pipeline \
--device "cuda:1" \
--rank 1 \
--balance 12 13 \
--no-recompute \
--precision fp32
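For intuition about the balance argument, here is a tiny illustrative sketch (not mLoRA's internal code) of how a list such as 12 13 could map layer indices to the two pipeline ranks, with each rank receiving a contiguous block of layers:
# Illustrative only: map a balance list to contiguous layer ranges per rank
def split_layers(balance):
    assignment, start = {}, 0
    for rank, count in enumerate(balance):
        assignment[rank] = list(range(start, start + count))
        start += count
    return assignment

print(split_layers([12, 13]))
# rank 0 gets layers 0-11, rank 1 gets layers 12-24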
mLoRA offers an official Docker image for quick start and development. The image is available on the Docker Hub registry.
First, pull the latest image (the same image is also used for development):
docker pull yezhengmaolove/mlora:latest
Deploy and enter a container to run mLoRA:
docker run -itd --runtime nvidia --gpus all \
-v ~/your_dataset_dir:/dataset \
-v ~/your_model_dir:/model \
-p <host_port>:22 \
--name mlora \
yezhengmaolove/mlora:latest
# once the container has started, use ssh to log in
# the default password is mlora@123
ssh root@localhost -p <host_port>
# pull the latest code and run mLoRA
cd /mLoRA
git pull
python mlora_train.py \
--base_model TinyLlama/TinyLlama-1.1B-Chat-v0.4 \
--config demo/lora/lora_case_1.yaml
We can deploy mLoRA as a service that continuously receives user requests and performs fine-tuning tasks.
First, pull the latest image (the same image is used for deployment):
docker pull yezhengmaolove/mlora:latest
Deploy our mLoRA server:
docker run -itd --runtime nvidia --gpus all \
-v ~/your_dataset_cache_dir:/cache \
-v ~/your_model_dir:/model \
-p <host_port>:8000 \
--name mlora_server \
-e "BASE_MODEL=TinyLlama/TinyLlama-1.1B-Chat-v0.4" \
-e "STORAGE_DIR=/cache" \
yezhengmaolove/mlora:latest /bin/bash /opt/deploy.sh
Once the service is deployed, install and use mlora_cli.py to interact with the server.
# install the client tools
pip install mlora-cli
# use the mlora cli tool to connect to mlora server
mlora_cli
(mLoRA) set port <host_port>
(mLoRA) set host http://<host_ip>
# and enjoy it!!
Using mLoRA can save significant computational and memory resources when training multiple adapters simultaneously.
We fine-tuned multiple LoRA adapters using four A6000 graphics cards with fp32 precision, without gradient checkpointing or any quantization techniques:
Model | mLoRA (tokens/s) | PEFT-LoRA with FSDP (tokens/s) | PEFT-LoRA with TP (tokens/s) |
---|---|---|---|
llama-2-7b (fp32) | 2364 | 1750 | 1500 |
llama-2-13b (fp32) | 1280 | OOM | 875 |
Supported base models:

Supported | Model |
---|---|
✓ | LLaMA |

Supported LoRA variants:

Supported | Variant |
---|---|
✓ | QLoRA, NIPS, 2023 |
✓ | LoRA+, ICML, 2024 |
✓ | VeRA, ICLR, 2024 |
✓ | DoRA, ICML, 2024 |

Supported preference alignment algorithms:

Supported | Variant |
---|---|
✓ | DPO, NeurIPS, 2024 |
✓ | CPO, ICML, 2024 |
✓ | CIT, arXiv, 2024 |
We welcome contributions to improve this repository! Please review the contribution guidelines before submitting pull requests or issues.
Fork the repository. Create a new branch for your feature or fix. Submit a pull request with a detailed explanation of your changes.
You can use pre-commit to check your code.
# Install requirements
pip install .[ci_test]
ln -s ../../.github/workflows/pre-commit .git/hooks/pre-commit
Or just call the script directly to check your code:
.github/workflows/pre-commit
Please cite this repository if you use its code.
@misc{m-LoRA,
author = {Zhengmao, Ye\textsuperscript{*} and Dengchun, Li\textsuperscript{*} and Jingqi, Tian and Tingfeng, Lan and Yanbo, Liang and Yexi, Jiang and Jie, Zuo and Hui, Lu and Lei, Duan and Mingjie, Tang},
title = {m-LoRA: Efficient LLM Model Fine-tune and Inference via Multi-Lora Optimization},
year = {2023},
publisher = {GitHub},
howpublished = {\url{https://github.com/TUDB-Labs/mLoRA}},
note={\textsuperscript{*}: these authors contributed equally to this work.}
}
Copyright © 2024 All Rights Reserved.
This project is licensed under the Apache 2.0 License.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.