-
## Description
We observe no improvement from PaddingFree with QLoRA and GPTQ-LoRA when running benchmarks on OrcaMath.
However,
- additionally applying FOAK along with PaddingFree shows signif…
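As background on what PaddingFree does: instead of padding every sequence in a batch to the longest length, sequences are concatenated and their boundaries tracked via cumulative sequence lengths, which varlen attention kernels (e.g. FlashAttention's varlen interface) consume directly. A minimal sketch of the packing step, with a hypothetical helper name and illustrative token IDs:

```python
# Sketch of padding-free packing: concatenate sequences and record
# boundaries instead of padding to the batch maximum. The function
# name is hypothetical; real trainers feed cu_seqlens to a varlen
# attention kernel rather than using square attention masks.

def pack_without_padding(sequences):
    """Concatenate token sequences and return (tokens, cu_seqlens).

    cu_seqlens[i] is the start offset of sequence i; the final entry
    is the total token count, matching the varlen-attention convention.
    """
    tokens = []
    cu_seqlens = [0]
    for seq in sequences:
        tokens.extend(seq)
        cu_seqlens.append(cu_seqlens[-1] + len(seq))
    return tokens, cu_seqlens

batch = [[101, 7, 8], [101, 9], [101, 5, 6, 4]]
tokens, cu_seqlens = pack_without_padding(batch)
# 9 real tokens instead of 3 * 4 = 12 padded slots
print(tokens)      # [101, 7, 8, 101, 9, 101, 5, 6, 4]
print(cu_seqlens)  # [0, 3, 5, 9]
```

The saving grows with length variance in the batch, which is why gains depend heavily on the dataset's length distribution.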
-
Provide an approach that allows finetuning LLMs with LoRA more efficiently.
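For reference, LoRA's efficiency comes from training only a low-rank update to each frozen weight: the effective weight is W + (alpha / r) * B @ A, where B is d_out x r and A is r x d_in. A minimal plain-Python sketch of the merged weight (shapes and values are illustrative):

```python
# Minimal LoRA sketch: the frozen weight W gets a trainable low-rank
# update scaled by alpha / r, so the effective (merged) weight is
# W + (alpha / r) * B @ A. All matrices here are tiny and illustrative.

def matmul(X, Y):
    """Naive matrix multiply for small illustrative matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def lora_effective_weight(W, A, B, alpha, r):
    """Return W + (alpha / r) * (B @ A), the merged LoRA weight."""
    scale = alpha / r
    BA = matmul(B, A)
    return [[W[i][j] + scale * BA[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# d_out = 2, d_in = 2, rank r = 1
W = [[1.0, 0.0], [0.0, 1.0]]   # frozen base weight
B = [[2.0], [0.0]]             # d_out x r, trained
A = [[1.0, 3.0]]               # r x d_in, trained
merged = lora_effective_weight(W, A, B, alpha=1, r=1)
print(merged)  # [[3.0, 6.0], [0.0, 1.0]]
```

Only A and B receive gradients, so for rank r much smaller than the matrix dimensions the trainable parameter count (and optimizer state) shrinks by orders of magnitude.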
-
Hi, I just followed recipes/zephyr-7b-beta/dpo/config_qlora.yaml and hope to replicate the experiments. I was training on a single A10G GPU, and the only modification I made was reducing the train_batch…
-
https://github.com/artidoro/qlora
https://arxiv.org/abs/2305.14314
> We present QLoRA, an efficient finetuning approach that reduces memory usage enough to finetune a 65B parameter model on a s…
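The memory saving the paper describes comes largely from quantizing the frozen base weights to 4 bits with blockwise absmax scaling against a small codebook (QLoRA's NormalFloat, NF4), dequantizing on the fly during the forward pass. A simplified sketch of that mechanism, using a uniform 16-level codebook purely for illustration (the real NF4 codebook values differ):

```python
# Simplified sketch of blockwise 4-bit quantization, the mechanism
# behind QLoRA's frozen base weights. QLoRA uses a 16-level
# NormalFloat (NF4) codebook; a uniform codebook is used here
# purely for illustration.

CODEBOOK = [i / 7.5 - 1.0 for i in range(16)]  # 16 levels spanning [-1, 1]

def quantize_block(values):
    """Quantize one block: store an absmax scale + 4-bit codebook indices."""
    scale = max(abs(v) for v in values) or 1.0
    indices = []
    for v in values:
        norm = v / scale  # normalize into [-1, 1]
        # pick the nearest codebook entry
        idx = min(range(16), key=lambda i: abs(CODEBOOK[i] - norm))
        indices.append(idx)
    return scale, indices

def dequantize_block(scale, indices):
    """Reconstruct approximate values from the scale and indices."""
    return [scale * CODEBOOK[i] for i in indices]

block = [0.12, -0.5, 0.33, 0.9]
scale, idx = quantize_block(block)
approx = dequantize_block(scale, idx)
# each reconstructed value is within one quantization step of the original
assert all(abs(a - b) <= scale / 7.5 for a, b in zip(approx, block))
```

Each block stores one full-precision scale plus 4 bits per weight, which is how a 65B-parameter model's weights fit alongside LoRA adapters on a single large GPU.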
-
Hi there,
I'm not sure whether this is the right place to ask.
Is CTranslate2 going to support QLoRA? Please see the following paper for more information:
https://arxiv.org/abs/2305.14314
Thanks.
-
The documentation is quite detailed, and I followed it step by step, but I got the following error when running fine-tuning. Since I'm doing this on Windows 11, could you offer any advice on this issue?
```
(internlm2) D:\pythondev\PyProjects\XTuner>xtuner train D:\pythondev\PyProjects\xtuner-0.1.18\config\internlm2_1_8b_qlora_al…
```
-
Dear VideoLLaMA2 Maintainers,
I have been using your library and successfully fine-tuned models with LoRA and QLoRA on my own dataset. However, I noticed that the repository does not include code f…
-
### Is there an existing issue / discussion for this error?
- [X] I have searched the existing issues / discussions
### Is this question answered in the FAQ? …
-
Using this issue to track the work needed to enable this. We'll capture our learnings, TODOs, and PRs here so everyone can follow along.
-
### System Info
---
**Setup Summary for LoRAX Benchmarking with Llama-2 Model:**
- **Hardware**: A100 40 GB (a2-highgpu-2g) on Google Kubernetes Engine (GKE)
- **Image**: ghcr.io/predibase…