-
I tried fine-tuning with script/custom/finetune_qlora.sh, but when I load the model for inference it does not work. How do I load the model with the weights fine-tuned using QLoRA? I tried this code:
```
…
```
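A minimal sketch of how such a run is usually loaded for inference with Hugging Face `peft`, assuming the fine-tuning script saved a LoRA adapter directory; the base model id and adapter path below are placeholders:
```python
# Sketch: attach a QLoRA adapter to its 4-bit base model for inference.
# base_id and adapter_dir are placeholders; use whatever finetune_qlora.sh produced.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"     # assumption: the base model used for fine-tuning
adapter_dir = "outputs/qlora_adapter"    # assumption: directory with adapter_config.json + adapter weights

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_dir)  # loads the QLoRA adapter on top of the frozen base
model.eval()

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```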
-
### Your current environment
The output of `python collect_env.py`
```text
Your output of `python collect_env.py` here
```
### Model Input Dumps
_No response_
### 🐛 Describe t…
-
Hi!
Windows 11.
When I'm installing version Llama3.2-3B-Instruct:int4-qlora-eo8,
I get an error, NotADirectoryError: [WinError 267] The directory name is invalid, because of the colon ":" in the name.
…
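For context, a minimal illustration of the underlying restriction and one possible workaround; this is not the installer's own code, and the sanitizing helper is hypothetical:
```python
# Illustration only (hypothetical helper, not the installer's code): Windows path
# components cannot contain ":", so a tag like "Llama3.2-3B-Instruct:int4-qlora-eo8"
# fails when used directly as a directory name. Sanitizing the tag avoids the error.
import re
from pathlib import Path

def safe_dir_name(model_tag: str) -> str:
    # Replace characters Windows forbids in file/directory names.
    return re.sub(r'[<>:"/\\|?*]', "-", model_tag)

tag = "Llama3.2-3B-Instruct:int4-qlora-eo8"
target = Path.home() / "models" / safe_dir_name(tag)  # ...\models\Llama3.2-3B-Instruct-int4-qlora-eo8
target.mkdir(parents=True, exist_ok=True)
print(target)
```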
-
### ⚠️ Please check that this feature request hasn't been suggested before.
- [X] I searched previous [Ideas in Discussions](https://github.com/OpenAccess-AI-Collective/axolotl/discussions/categories…
-
After QLoRA fine-tuning in bitsandbytes mode, how can I run inference with llamafactory-cli chat when the base model is a bnb-quantized model? vLLM does not yet support bitsandbytes-mode inference for Qwen2.
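I'm not sure of the exact llamafactory-cli configuration for this case, but at the transformers/PEFT level the corresponding load looks roughly like the sketch below; the bnb-quantized base checkpoint and adapter path are placeholders:
```python
# Sketch (placeholders, not llamafactory-cli itself): when the base checkpoint is
# already bnb-quantized, its quantization config ships with the checkpoint, so no
# extra BitsAndBytesConfig is needed at load time; the QLoRA adapter is attached via peft.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "path/to/qwen2-bnb-4bit"      # assumption: a bnb-quantized Qwen2 checkpoint
adapter_dir = "saves/qwen2-qlora"       # assumption: adapter produced by the QLoRA run

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_dir)
model.eval()

messages = [{"role": "user", "content": "你好"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```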
-
Hello, first of all thank you for your work. I'm trying to add a few tokens before fine-tuning the model, but I'm running into a few errors.
**First, I downloaded the model:**
```
max_seq_length = 4096 …
```
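One frequent source of errors when adding tokens is forgetting to resize the embedding layer to the enlarged vocabulary; a minimal sketch of the usual pattern (model name and tokens are placeholders):
```python
# General pattern for adding new tokens before fine-tuning (placeholder names).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"   # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

new_tokens = ["<custom_tok_1>", "<custom_tok_2>"]   # hypothetical new tokens
num_added = tokenizer.add_tokens(new_tokens)

# The embedding matrix must match the enlarged vocabulary, otherwise the new
# token ids index out of range during training.
if num_added > 0:
    model.resize_token_embeddings(len(tokenizer))
```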
-
I adapted your fine-tuning notebook to a regression task. Unfortunately the model becomes unstable during training and only returns NaNs as hidden states to my regression head.
Also I wanted to try ql…
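For what it's worth, NaNs in the hidden states during fine-tuning are often a precision or learning-rate issue; a rough sketch of the usual first mitigations (these are generic settings, not the notebook's own):
```python
# Rough sketch (generic mitigations, not the notebook's exact settings):
# prefer bf16 over fp16 where supported, lower the learning rate, and clip gradients.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="outputs/regression-run",  # placeholder
    bf16=True,                            # bf16 has a much larger dynamic range than fp16
    learning_rate=1e-5,                   # lower than typical LM fine-tuning defaults
    max_grad_norm=1.0,                    # gradient clipping
    num_train_epochs=1,
)
```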
-
Are there plans to integrate QLoRA into this tuner? Does it require structural changes to support it?
https://github.com/artidoro/qlora
It's already great as is, but the 4-bit quantized models are si…
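As a rough illustration of what QLoRA support involves at the library level (this uses Hugging Face transformers + peft + bitsandbytes, not this tuner's internals; the model id is a placeholder):
```python
# Rough illustration of the QLoRA recipe from the linked repo, expressed with
# Hugging Face transformers + peft + bitsandbytes (not this tuner's own API).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "meta-llama/Llama-2-7b-hf"   # placeholder

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",               # NF4 quantization from the QLoRA paper
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config, device_map="auto")
model = prepare_model_for_kbit_training(model)   # freeze base weights, prepare for k-bit training
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM"))
model.print_trainable_parameters()               # only the LoRA adapters are trainable
```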
-
How to fine-tune VideoLLaMA2 chat models using QLoRA and LoRA.
...
--data_path datasets/custom_sft/custom.json
--data_folder datasets/custom_sft/
--pretrain_mm_mlp_adapter CONNECTOR_DOWNLOAD_PAT…
-
While executing the file in the folder `Olive/examples/llama2` I got the error
TypeError: LlamaForCausalLM.forward() got an unexpected keyword argument 'past_key_values.0.key'
while executing:
`py…