microsoft / LoRA

Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
https://arxiv.org/abs/2106.09685
MIT License

Support multi-lora fine tune in the same GPU #136

Open merlintang opened 11 months ago

merlintang commented 11 months ago

Dear all,

We are implementing a multi-LoRA framework that supports fine-tuning multiple LLMs sharing the same base model on a single GPU.

We would be glad to work with the community to make LoRA fine-tuning use less GPU memory. You can check our contribution in this repo: https://github.com/TUDB-Labs/multi-lora-fine-tune

PRs are welcome.

Thanks.