-
```
RuntimeError: Error(s) in loading state_dict for PeftModelForCausalLM:
Missing key(s) in…
```
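A common cause of this kind of missing-key error is loading an adapter-only checkpoint into a model with a strict `state_dict` load: a PEFT checkpoint stores only the LoRA tensors, keyed under a `base_model.model.` prefix, so every base weight looks "missing". A dependency-free sketch of the key mismatch (all parameter names below are illustrative, not Falcon's real ones):

```python
# Illustrative sketch: why strictly loading a PEFT adapter checkpoint
# reports "Missing key(s)". All parameter names are hypothetical.

# Keys the wrapped PeftModel expects (base weights + injected LoRA weights).
model_keys = {
    "base_model.model.transformer.h.0.attn.weight",
    "base_model.model.transformer.h.0.attn.lora_A.weight",
    "base_model.model.transformer.h.0.attn.lora_B.weight",
}

# A PEFT adapter checkpoint contains only the LoRA tensors.
adapter_keys = {
    "base_model.model.transformer.h.0.attn.lora_A.weight",
    "base_model.model.transformer.h.0.attn.lora_B.weight",
}

# Strict loading treats every key the checkpoint lacks as an error;
# PeftModel.from_pretrained instead loads the adapter non-strictly on
# top of an already-initialized base model.
missing = sorted(model_keys - adapter_keys)
print(missing)  # → ['base_model.model.transformer.h.0.attn.weight']
```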
-
I have a PEFT adapter for a fine-tuned Falcon-7B model; running it produces gibberish. The problem appears to be in `get_generate_stream_function` https://github.com/lm-sys/FastChat/blob/ae8abd20cbe182…
-
### 🐛 Describe the bug
From my understanding, when saving checkpoints for peft models (see [here](https://github.com/CarperAI/trlx/blob/bcd237f1e94c84c5c9f5a4086bab34c0946e3fa7/trlx/trainer/accelerate…
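For context, a PEFT adapter checkpoint normally keeps only the trainable LoRA parameters rather than the full model state dict. A simplified, dependency-free sketch of that filtering (a real implementation would use peft's `get_peft_model_state_dict`; the parameter names here are hypothetical):

```python
# Sketch: a PEFT-style checkpoint keeps only the trainable LoRA weights.
# Parameter names and values are hypothetical stand-ins for real tensors.

full_state_dict = {
    "transformer.h.0.attn.weight": "frozen base tensor",
    "transformer.h.0.attn.lora_A.weight": "trainable",
    "transformer.h.0.attn.lora_B.weight": "trainable",
}

# Keep only the adapter (LoRA) parameters for the checkpoint.
adapter_state_dict = {
    name: tensor
    for name, tensor in full_state_dict.items()
    if "lora_" in name
}

print(sorted(adapter_state_dict))
# → ['transformer.h.0.attn.lora_A.weight', 'transformer.h.0.attn.lora_B.weight']
```

Resuming from such a checkpoint therefore requires reattaching the adapter to a freshly loaded base model, not restoring the whole state dict.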
-
```shell
nproc_per_node=8
NPROC_PER_NODE=$nproc_per_node \
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
swift sft \
--model_type qwen1half-moe-a2_7b \
--model_id_or_path model_path \
--sft_type …
```
-
### Your current environment
Collecting environment information.
PyTorch version: 2.3.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: CentOS Li…
-
Let's say I have the following code in Python. How would I translate that to js?
````
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoToke…
````
-
A few options to explore:
1. NVIDIA NeMo, TensorRT-LLM, Triton
- NeMo
Run [this Generative AI example](https://github.com/NVIDIA/GenerativeAIExamples/tree/main/models/Gemma) to build LoRA wi…
-
Currently, this function supports only a single LoRA format. Could it support the LoRA fusion implemented by the [peft](https://github.com/huggingface/peft) library in demo/Diffusion,
such as:
```…
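For reference, peft-style LoRA fusion folds the low-rank update into the base weight: W' = W + scale * (B @ A). A small dependency-free sketch of that arithmetic (shapes, values, and the scale are illustrative; real code operates on torch tensors):

```python
# Dependency-free sketch of LoRA weight fusion: W' = W + scale * (B @ A).
# Shapes and values are illustrative only.

def matmul(X, Y):
    rows, inner, cols = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

def fuse_lora(W, A, B, scale):
    # W: (out, in), B: (out, r), A: (r, in); r is the LoRA rank.
    delta = matmul(B, A)
    return [[W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

W = [[1.0, 0.0], [0.0, 1.0]]   # 2x2 base weight
B = [[1.0], [0.0]]             # rank-1 LoRA factors
A = [[0.0, 2.0]]
print(fuse_lora(W, A, B, scale=0.5))  # → [[1.0, 1.0], [0.0, 1.0]]
```

After fusion, inference needs no extra LoRA branches, which is why supporting it in the demo pipeline would be convenient.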
-
When I fine-tune Llama-2-7B with LoRA, the following error occurs:
```
Traceback (most recent call last):
  File "/home/ubuntu/lora/alpaca-lora-main/finetune.py", line 290, in <module>
    fire.Fire(train)
F…
```
-
### Reminder
- [X] I have read the README and searched the existing issues.
### System Info
- `llamafactory` version: 0.9.1.dev0
- Platform: Linux-3.10.0-1160.el7.x86_64-x86_64-with-glibc2.35
- P…