-
### Describe the bug
Running the latest build results in a torch error:
```
python server.py --api --listen --n-gpu-layers 32 --threads 8 --numa --tensorcores --trust-remote-code
```
...
```
Runtime…
-
After looking through the code, it currently does not seem possible to load adapter models produced by `peft`. It would be a great addition to the HF DLCs.
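For reference, loading such an adapter directly with `peft` is usually just a `PeftModel.from_pretrained` call on top of the base model. Below is a minimal sketch, assuming a hypothetical adapter repo `my-org/my-lora-adapter` and an illustrative base model id:
```py
# Hedged sketch: attach a peft-produced LoRA adapter to its base model.
# "my-org/my-lora-adapter" is a placeholder repo id, not a real adapter.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B"  # assumed base model, for illustration only
base_model = AutoModelForCausalLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Loads adapter_config.json and the adapter weights saved by peft
model = PeftModel.from_pretrained(base_model, "my-org/my-lora-adapter")
model.eval()
```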
-
Traceback (most recent call last):
File "/ossfs/workspace/sft/sft_all.py", line 161, in
train()
File "/ossfs/workspace/sft/sft_all.py", line 125, in train
Traceback (most recent call last…
-
### System Info
Peft v0.13.2
Transformers v4.44.0
Accelerate v0.33.0
### Who can help?
Since this relates to an interaction between PEFT and X-LoRA, maybe @BenjaminBossan @EricLBuehler can help.
### …
-
Hi and thanks for the great resources.
I used "train-deploy-llama3.ipynb" and trained a similar Llama3 model as shown in the notebook.
I pushed my model on hugging face and now I want to use that …
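In case it helps to make the question concrete, pulling the pushed checkpoint back from the Hub for inference is typically a plain `from_pretrained` call. A minimal sketch, assuming a hypothetical repo id `my-user/llama3-finetuned` and a fully merged (non-adapter) checkpoint:
```py
# Hedged sketch: load a fine-tuned Llama 3 model that was pushed to the Hugging Face Hub.
# "my-user/llama3-finetuned" is a placeholder repo id, not the actual model from the notebook.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "my-user/llama3-finetuned"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # assumes a GPU with bf16 support
    device_map="auto",           # requires accelerate to be installed
)

prompt = "Hello, how are you?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```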
-
### Name and Version
version: 4179 (25669aa9)
built with Apple clang version 15.0.0 (clang-1500.3.9.4) for arm64-apple-darwin23.4.0
### Operating systems
Mac
### Which llama.cpp modules do you kn…
-
**Describe the bug (Mandatory)**
While fine-tuning ChatGLM3-6b with LoRA from mindnlp.peft, a TypeError is raised in the LoRA linear layer during training.
- **Hardware Environment (`Ascend`/`GPU`/`CPU`)**:
GPU
- **Software Environ…
-
### Describe the bug
We get a KeyError when the state dict is loaded into the transformer.
### Reproduction
```py
import torch
from diffusers.models import FluxTransformer2DModel
from …
-
### Reminder
- [X] I have read the README and searched the existing issues.
### System Info
- `llamafactory` version: 0.8.4.dev0
- Platform: Linux-5.4.0-26-generic-aarch64-with-glibc2.31
- Python…
-
As reported by @ArthurZucker:
> Quick question, I am seeing this in peft: https://github.com/huggingface/peft/blob/f2b6d13f1dbc971c7653aa65e82822ea2d84bb38/src/peft/peft_model.py#L1665 where there …