-
Does LoftQ support the case where we need to train the embedding layer with QLoRA?
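A minimal sketch of how this might be configured with the Hugging Face peft library, assuming a LLaMA-style model; the model name, rank, alpha, and module names are illustrative, and whether LoftQ initialization actually composes with a trainable embedding is exactly the open question here.

```python
# Hypothetical sketch: LoftQ initialization plus a trainable embedding layer in peft.
# Model name and module names are assumptions (LLaMA-style naming), not a confirmed answer.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, LoftQConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    modules_to_save=["embed_tokens", "lm_head"],  # keep full-precision, trainable copies of the embeddings
    init_lora_weights="loftq",                    # ask peft to use LoftQ initialization
    loftq_config=LoftQConfig(loftq_bits=4),
)
model = get_peft_model(base, config)
model.print_trainable_parameters()
```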
-
Trying to train on a 12 GB GPU: python qlora.py --model_name="chinese_alpaca" --model_name_or_path="./model_hub/chinese-alpaca-7b" --trust_remote_code=False --dataset="msra" --source_max_len=128 --target_max_len=64 --do_t…
-
I fine-tuned the llava-phi3 model with LoRA, but when I tried to convert the resulting weights, an error occurred.
This is my command:
xtuner convert pth_to_hf ./my_configs/llava_phi3_mini_qlora_clip_vit_l…
-
1. NormalFloat + double quantization
QLoRA currently uses zero-shot quantization, which differs from GPTQ: it does not require calibration data, but it incurs some performance loss. Theref…
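For reference, a sketch of what "NormalFloat + double quantization" looks like with the bitsandbytes integration in transformers; the model name is only a placeholder.

```python
# Sketch: 4-bit NormalFloat (NF4) with double quantization via BitsAndBytesConfig.
# The model name is an example placeholder.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",            # 4-bit NormalFloat data type
    bnb_4bit_use_double_quant=True,       # also quantize the quantization constants
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
```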
-
It would be fantastic if we could add the ability to do LoRA fine-tuning and merging of adapters.
**Background on QLoRA**
- Interestingly, for many fine-tunings, the results of QLoRA are very simi…
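A minimal sketch of what adapter merging typically looks like with peft; the paths are placeholders, and this assumes the base model is loaded in full precision (merging into a 4-bit base is a separate question).

```python
# Sketch: merge a trained LoRA adapter into the (full-precision) base weights with peft.
# Paths are placeholders.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("path/to/base-model")
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")
merged = model.merge_and_unload()            # fold the LoRA deltas back into the base weights
merged.save_pretrained("path/to/merged-model")
```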
-
In qlora.py at https://github.com/artidoro/qlora/blob/main/qlora.py#L279, torch_dtype is assigned to torch.float32 when fp16 is specified. Should we use torch.float16 instead, or is this intentional, if …
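For clarity, the alternative this question proposes would look something like the following; this is illustrative only, not the repository's actual code.

```python
# Illustrative sketch of the dtype selection the question is asking about;
# not the actual qlora.py logic.
import torch

def pick_torch_dtype(fp16: bool, bf16: bool) -> torch.dtype:
    if bf16:
        return torch.bfloat16
    if fp16:
        return torch.float16  # the question asks whether this should replace float32
    return torch.float32
```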
-
Traceback (most recent call last):
  File "/mnt/ssd/lcq/qlora-main/qlora.py", line 791, in <module>
    train()
  File "/mnt/ssd/lcq/qlora-main/qlora.py", line 773, in train
    predictions = tokenizer.…
-
### Feature request
Adapter Tuning
Prompt Tuning
Prefix Tuning
P-Tuning & P-Tuning v2
LoRA & AdaLoRA & QLoRA
Is only LoRA supported at the moment?
I could not find inference examples for fine-tuning methods other than LoRA.
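For what it's worth, a hedged sketch of how inference with a non-LoRA adapter generally works in the Hugging Face peft library: PeftModel.from_pretrained reads adapter_config.json and reconstructs whichever method was saved. The model path, adapter path, and prompt are placeholders, and whether this project exposes the same path is the open question.

```python
# Sketch (assumptions: Hugging Face peft, placeholder paths): loading a saved
# prompt-tuning / prefix-tuning / p-tuning / LoRA adapter for inference.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("path/to/base-model")
tokenizer = AutoTokenizer.from_pretrained("path/to/base-model")
model = PeftModel.from_pretrained(base, "path/to/adapter")  # adapter type is read from adapter_config.json

inputs = tokenizer("Hello", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```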
### Motivation
None
### Your c…
-
I cannot train Qwen2 7B on a 4090 GPU; it results in out-of-memory (OOM) errors due to the loading of the embedding layer. The process is anticipated to demand over 27 GB of VRAM, exceeding the…
-
### System Info
- `transformers` version: 4.40.0.dev0
- Platform: Linux-5.15.0-101-generic-x86_64-with-glibc2.17
- Python version: 3.8.2
- Huggingface_hub version: 0.20.2
- Safetensors version: 0…