Open merlintang opened 11 months ago
Dear all,
We are implementing a multi-LoRA framework that supports fine-tuning multiple LLMs sharing the same base model on a single GPU.
We would be glad to work with the community to make LoRA fine-tuning use less GPU memory. You can check our contribution in this repo: https://github.com/TUDB-Labs/multi-lora-fine-tune
PRs are welcome.
Thanks!
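For readers unfamiliar with the idea: the memory saving comes from storing the frozen base weights once and keeping only small low-rank pairs per fine-tuning job. Below is a minimal, hypothetical sketch of that sharing scheme in NumPy; the class and method names are illustrative and are not the actual API of the repo linked above.

```python
import numpy as np

class MultiLoRALinear:
    """One frozen base weight shared by several LoRA adapters.

    Hypothetical sketch: each fine-tuning job trains only its own
    low-rank pair (A, B); the base matrix W is stored once, which is
    where the GPU-memory saving comes from.
    """

    def __init__(self, in_features, out_features, rank=8, alpha=16, seed=0):
        rng = np.random.default_rng(seed)
        # Frozen base weight, shared by every adapter.
        self.W = rng.standard_normal((out_features, in_features))
        self.scaling = alpha / rank
        self.rank = rank
        self.adapters = {}  # name -> (A, B), one small pair per job

    def add_adapter(self, name, seed=1):
        rng = np.random.default_rng(seed)
        A = rng.standard_normal((self.rank, self.W.shape[1])) * 0.01
        # B starts at zero, so a fresh adapter reproduces the base model.
        B = np.zeros((self.W.shape[0], self.rank))
        self.adapters[name] = (A, B)

    def forward(self, x, name):
        A, B = self.adapters[name]
        # Base path plus the adapter's low-rank update.
        return x @ self.W.T + (x @ A.T @ B.T) * self.scaling

layer = MultiLoRALinear(in_features=4, out_features=3)
layer.add_adapter("task_a", seed=1)
layer.add_adapter("task_b", seed=2)
x = np.ones((2, 4))
out_a = layer.forward(x, "task_a")
```

With rank much smaller than the layer dimensions, each extra adapter adds only `rank * (in_features + out_features)` trainable parameters on top of the single shared copy of `W`.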