-
Is it possible to train Aria with QLoRA?
-
For QLoRA, is the fine-tuning done in fp16 or in int4?
Why does the result I get still require the original fp16 model parameters? That makes the output very large.
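On the question above: a QLoRA checkpoint normally contains only the low-rank adapter factors, not the base weights, which is why using (or merging) it still requires the original fp16 model. A toy parameter-count sketch, with hypothetical layer shapes chosen for illustration:

```python
# Toy illustration (not any library's real API): why a QLoRA checkpoint is
# tiny but a merged model is full-size. LoRA replaces the weight update dW
# with a low-rank product B @ A, so training saves only A and B.

d_out, d_in, r = 1024, 1024, 8            # hypothetical layer shape, LoRA rank

full_params    = d_out * d_in             # frozen base weight W (fp16)
adapter_params = r * d_in + d_out * r     # LoRA factors A (r x d_in), B (d_out x r)

print(full_params // adapter_params)      # adapter is ~64x smaller here
# To get a standalone model you merge: W' = W + scale * (B @ A).
# W' has the same shape as W, so the merged artifact is as large as the
# original fp16 parameters -- the adapter alone is not a usable model.
```

In short: small on disk during training, full-size only after merging.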
-
I realize OpenVINO was originally made for vision models, but I'm interested in using OpenVINO for fine-tuning LLMs. It appears there is support for fine-tuning ViT models but not for language models…
-
### Describe the feature
How can we support LoRA/QLoRA in the Gemini or TorchFSDP plugins?
If there’s documentation on this feature, it might encourage community contributions.
Thanks a lot.
-
Does sgl support QLoRA? Could you provide some instructions on how to use it?
-
In the example example/CPU/QLoRA-FineTuning/qlora_finetuning_cpu.py, a comment mentions that nf4 is not supported on CPU yet, but when I change the example from int4 to nf4 it still runs witho…
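For context on what actually differs between int4 and nf4: both are 4-bit codebook quantizations with absmax scaling, and only the codebook changes. A self-contained toy sketch (the NF4 levels below are the normal-distribution quantiles described in the QLoRA paper, rounded here; the uniform grid is an assumption for illustration, not any library's exact kernel):

```python
# Toy 4-bit absmax quantization with two codebooks: a uniform "int4"-style
# grid vs the NF4 grid (quantiles of a standard normal). Illustrative only.

INT4 = [i / 7 for i in range(-7, 8)]          # 15 uniform levels in [-1, 1]
NF4  = [-1.0, -0.6962, -0.5251, -0.3949, -0.2844, -0.1848, -0.0911, 0.0,
        0.0796, 0.1609, 0.2461, 0.3379, 0.4407, 0.5626, 0.7230, 1.0]

def quantize(xs, codebook):
    """Absmax-scale xs into [-1, 1], then snap each value to the nearest level."""
    scale = max(abs(x) for x in xs) or 1.0
    idx = [min(range(len(codebook)), key=lambda i: abs(x / scale - codebook[i]))
           for x in xs]
    return idx, scale

def dequantize(idx, scale, codebook):
    return [codebook[i] * scale for i in idx]

xs = [0.03, -0.12, 0.95, -0.4]
for name, cb in (("int4", INT4), ("nf4", NF4)):
    idx, s = quantize(xs, cb)
    print(name, dequantize(idx, s, cb))
# NF4 concentrates levels near zero, which suits normally distributed
# weights; the surrounding quantize/dequantize machinery is identical,
# only the lookup table differs.
```

Because the machinery is shared, a backend can plausibly accept both format strings even when only one codebook is tuned for it, though whether that explains the CPU behavior above is a question for the maintainers.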
-
Currently only the original LoRA is supported as a non-fused adapter. I hope support can be added for QLoRA/QA-LoRA adapters as well, without fusing them into the base model.
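The fused/non-fused distinction matters especially for QLoRA, where the base weights are quantized and an fp16 delta cannot be baked in without dequantizing. A minimal plain-Python sketch (hypothetical 2×2 shapes, rank 1) showing that the two inference paths compute the same output:

```python
# Minimal sketch of fused vs non-fused LoRA at inference time.

def matvec(W, x):
    return [sum(w * v for w, v in zip(row, x)) for row in W]

def add(a, b):
    return [u + v for u, v in zip(a, b)]

# Frozen base weight W and LoRA factors A (r x d_in), B (d_out x r), r = 1.
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[0.5, 0.5]]
B = [[1.0], [2.0]]

x = [2.0, 4.0]

# Non-fused: leave W untouched, add the adapter path at runtime.
y_unfused = add(matvec(W, x), matvec(B, matvec(A, x)))

# Fused: bake B @ A into W once, then do a single matmul.
BA = [[sum(B[i][k] * A[k][j] for k in range(len(A))) for j in range(len(A[0]))]
      for i in range(len(B))]
W_fused = [[W[i][j] + BA[i][j] for j in range(len(W[0]))] for i in range(len(W))]
y_fused = matvec(W_fused, x)

print(y_unfused, y_fused)  # identical outputs
```

The non-fused path costs an extra (cheap, low-rank) matmul per layer but keeps the base weights pristine, which is exactly what a quantized QLoRA base requires.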
-
### 🚀 The feature, motivation and pitch
I see that llama-stack is becoming a very powerful set of tools that sits on top of LLM models.
Inference, memory, agents, scoring, eval, etc. can be used via APIs…
-
I have installed trl