kongds / MoRA

MoRA: High-Rank Updating for Parameter-Efficient Fine-Tuning
https://arxiv.org/abs/2405.12130
Apache License 2.0

Integrate into torchtune #8

Closed impredicative closed 5 months ago

impredicative commented 5 months ago

torchtune is a native-PyTorch library for LLM fine-tuning.

With regard to integrating MoRA into torchtune, please refer to this discussion: https://github.com/pytorch/torchtune/discussions/1043

kongds commented 5 months ago

Thanks for your interest in our method.

Regarding the different MoRA types mentioned in the discussion: we only use types 1 and 6. Types such as 2, 3, and 4 are used internally by ReMoRA to change the type of sharing group during training.
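
For reference, here is a minimal sketch of how the type would be selected through the customized peft fork in this repository. The `use_mora` / `mora_type` keyword names follow the repo README, and the rank and target modules below are illustrative placeholders, not recommended settings:

```python
from peft import LoraConfig  # customized peft fork shipped with the MoRA repo

# Sketch: select the MoRA variant on the LoRA config.
config = LoraConfig(
    use_mora=True,    # enable MoRA instead of plain LoRA (fork-specific flag)
    mora_type=6,      # 1 and 6 are the user-facing types; 2-4 are internal to ReMoRA
    r=8,              # LoRA-style rank; MoRA derives its square-matrix size from it
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # illustrative targets
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
```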

impredicative commented 5 months ago

Ah, got it. Well, if you can implement your ideas in torchtune, it could go a long way toward easing their adoption, but only if you really believe in the approach.

kongds commented 5 months ago

Thanks for the advice. I agree that torchtune is a convenient tool for fine-tuning LLMs, but I am not familiar with it. Additionally, our repository provides a customized PEFT library that includes MoRA and is compatible with existing peft training scripts.
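
To illustrate the drop-in compatibility, here is a minimal sketch of how an existing peft-style script would pick up MoRA; the base model name and target modules are placeholders, and `use_mora` / `mora_type` are the fork-specific kwargs mentioned above:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Placeholder base model; any causal LM supported by peft works the same way.
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# Same MoRA-enabled config as in the earlier comment (fork-specific kwargs).
config = LoraConfig(
    use_mora=True, mora_type=6, r=8,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM",
)

# Wrap the base model exactly as an ordinary LoRA script would; from here any
# existing peft training loop (Trainer, SFTTrainer, a custom loop) can be reused
# without MoRA-specific changes.
model = get_peft_model(model, config)
model.print_trainable_parameters()
```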

impredicative commented 5 months ago

What the suggested integration would do is give the technique more exposure.