intel-analytics / ipex-llm

Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Mixtral, Gemma, Phi, MiniCPM, Qwen-VL, MiniCPM-V, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, vLLM, GraphRAG, DeepSpeed, Axolotl, etc
Apache License 2.0

Feature Request: RoSA and QRoSA #10755

Open ElliottDyson opened 7 months ago

ElliottDyson commented 7 months ago

It would be brilliant if fine-tuning methods based on robust adaptation (RoSA and QRoSA) could be implemented, given how much better they perform than the LoRA and QLoRA methods.
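For context, RoSA (Robust Adaptation) trains both a low-rank update and a sparse update on top of the frozen pretrained weights, whereas LoRA trains only the low-rank part. Below is a minimal PyTorch sketch of that idea, not ipex-llm or any existing API: the class name `RoSALinear` is illustrative, and the random sparse mask is an assumption for brevity (the actual method selects the sparse support from gradient information).

```python
import torch
import torch.nn as nn

class RoSALinear(nn.Module):
    """Illustrative sketch: frozen linear layer plus a RoSA-style delta,
    i.e. a low-rank term (B @ A, as in LoRA) and a masked sparse term S."""
    def __init__(self, base: nn.Linear, rank: int = 8, sparsity: float = 0.01):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # pretrained weights stay frozen

        out_f, in_f = base.weight.shape
        # Low-rank pair initialized LoRA-style: B is zero, so the delta starts at 0.
        self.A = nn.Parameter(torch.randn(rank, in_f) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_f, rank))
        # Fixed sparse mask (random here for simplicity; RoSA derives it
        # from gradient magnitudes). Only the masked entries of S train.
        self.register_buffer("mask", (torch.rand(out_f, in_f) < sparsity).float())
        self.S = nn.Parameter(torch.zeros(out_f, in_f))

    def forward(self, x):
        delta = self.B @ self.A + self.S * self.mask  # low-rank + sparse update
        return self.base(x) + nn.functional.linear(x, delta)

# Quick smoke test
layer = RoSALinear(nn.Linear(16, 16), rank=4, sparsity=0.05)
print(layer(torch.randn(2, 16)).shape)  # torch.Size([2, 16])
```

QRoSA applies the same decomposition on top of a quantized base model, analogous to how QLoRA extends LoRA.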

Uxito-Ada commented 7 months ago

Hi @ElliottDyson , thanks for your proposal.

Currently we provide many fine-tuning options, e.g. ReLoRA, Axolotl, and DPO, as shown here, with GaLore and LISA on the way; some of these can outperform LoRA.

We will investigate RoSA and QRoSA and evaluate whether to support them.