-
### System Info
- `transformers` version: 4.36.2
- Platform: Linux-3.10.0-1160.49.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.9.19
- Huggingface_hub version: 0.24.6
- Safetensors versi…
-
When I train the lm_head layer with a PEFT LoRA, we get a LoRA adapter for that last layer only, but this adapter seems to be dropped at inference because the conversion gives it a wrong name (see #5 for code).
If…
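Not the original reproduction (that code is in #5), but a minimal sketch, assuming a generic base model (`gpt2` is only a placeholder), of how such an adapter is produced and how its saved tensor names can be listed; comparing those names against what the converter looks for should show where the rename goes wrong:

```python
# Sketch: attach LoRA to lm_head with PEFT, save the adapter, and print
# the tensor names stored in it.
from peft import LoraConfig, get_peft_model
from safetensors import safe_open
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder base model
config = LoraConfig(r=8, lora_alpha=16, target_modules=["lm_head"])
model = get_peft_model(model, config)
model.save_pretrained("lora-lm-head")

# Inspect the adapter weights; names look like
# base_model.model.lm_head.lora_A.weight (exact form depends on the model).
with safe_open("lora-lm-head/adapter_model.safetensors", framework="pt") as f:
    for name in f.keys():
        print(name)
```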
-
First of all, a big thank you for sharing this model with the world!!!
Anyway, I've been trying to train my own model based on this repo.
My objective for this training was to make use of un…
-
Environment: Windows 11, single RTX 4070 (12 GB)
Following the Tutorial:
[Tutorial](https://github.com/InternLM/Tutorial/tree/main)/[xtuner](https://github.com/InternLM/Tutorial/tree/main/xtuner)/README.md
2.3.6 Convert the obtained PTH model to…
-
Now that LoRA has been a very popular PEFT technique since Spring 2023, and LLaVA also offers it, what's the difference between PeFoMed and LLaVA?
-
### System Info
```shell
using Huggingface AMI from AWS marketplace with Ubuntu 22.04
optimum-neuron 0.0.25
transformers 4.45.2
peft 0.13.0
trl 0.11.4
accelerate 0.29.2
torch 2.1.2
```
…
-
# Description
Adds support for PEFT for chatllama models and training.
# TODO
- [x] Add PEFT to enable parameter-efficient fine-tuning in the actor, reward, and critic models (see the sketch after this list).
- [ ] Check RLHF stabil…
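For reference, a minimal sketch of the general pattern, assuming a Hugging Face causal LM as the actor; the model name and hyperparameters are placeholders, and this is not the chatllama code itself:

```python
# Illustrative only: wrap a base model with a PEFT LoRA adapter so that
# only the adapter weights are trained during fine-tuning.
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder actor model
lora = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
)
actor = get_peft_model(base, lora)
actor.print_trainable_parameters()  # only the LoRA weights are trainable
```

The same wrapping would apply to the reward and critic models, with `task_type` adjusted to match their heads.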
-
### System Info
```Shell
Please see
https://github.com/huggingface/peft/issues/484#issue-1718704717
```
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks…
-
### Describe the bug
There's something going on with the set_timesteps offset parameters in Stable Diffusion (v1.4). The timesteps are set from 1->1000 instead of from 0, and so it tries to index out of …
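For illustration, a minimal sketch of the off-by-one the report describes (the schedule values are placeholders, not the scheduler's real betas): with 1-based timesteps, the first step indexes past the end of a length-1000 `alphas_cumprod`.

```python
import numpy as np

num_train_timesteps = 1000
# Placeholder noise schedule; valid indices are 0..999.
alphas_cumprod = np.cumprod(1.0 - np.linspace(1e-4, 2e-2, num_train_timesteps))

# As reported: timesteps run 1..1000 instead of 0..999.
timesteps = np.arange(1, num_train_timesteps + 1)[::-1]

alphas_cumprod[timesteps[0]]  # IndexError: index 1000 is out of bounds
```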
-
### System Info
```Shell
- `Accelerate` version: 1.0.1
- Platform: Linux-5.15.153.1-microsoft-standard-WSL2-x86_64-with-glibc2.31
- `accelerate` bash location: /home/a/anaconda3/envs/trans/bin/a…