-
Add support for PEFT models
## Description
Currently, only models that are instances of `PreTrainedModel` are supported. It would be useful to add support for models using Parameter-Effi…
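A minimal sketch of what such support could look like: unwrap an adapter-bearing model down to its underlying base model before the existing `PreTrainedModel` check. The class names below are illustrative stand-ins, not the real `transformers`/`peft` types or this project's actual API.

```python
# Hypothetical sketch: accept either a bare model or a PEFT-style wrapper
# that exposes the underlying model via .base_model.
# These classes are stand-ins, not the real transformers/peft types.

class PreTrainedModel:          # stand-in for transformers.PreTrainedModel
    pass

class PeftModel:                # stand-in for peft.PeftModel
    def __init__(self, base_model):
        self.base_model = base_model

def unwrap(model):
    """Return the underlying base model, unwrapping a PEFT wrapper if present."""
    if isinstance(model, PeftModel):
        return model.base_model
    if isinstance(model, PreTrainedModel):
        return model
    raise TypeError(f"Unsupported model type: {type(model).__name__}")

base = PreTrainedModel()
assert unwrap(PeftModel(base)) is base   # wrapper is transparently unwrapped
assert unwrap(base) is base              # plain models pass through unchanged
```

The same pattern works whether support is added by unwrapping (as sketched) or by checking `isinstance(model, PeftModel)` at each call site.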
-
In the HF Checkpointer, we warn the user that the adapter weights can't be converted to the PEFT format and will be converted to a torchtune format, but then we never save the adapter. [code](https://…
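One way the missing save could look, sketched in plain Python rather than the checkpointer's real code: after warning that PEFT-format conversion isn't possible, the adapter tensors could still be split out of the state dict by key name and written in the torchtune format instead of being silently dropped. The key markers below are an assumed naming convention.

```python
# Illustrative sketch (not the actual HF Checkpointer implementation):
# separate adapter tensors from base-model tensors by key name so the
# adapter can be saved instead of discarded.

ADAPTER_KEY_MARKERS = ("lora_a", "lora_b")  # assumed adapter key naming

def split_adapter_weights(state_dict):
    """Partition a state dict into (base weights, adapter weights)."""
    adapter = {k: v for k, v in state_dict.items()
               if any(m in k.lower() for m in ADAPTER_KEY_MARKERS)}
    base = {k: v for k, v in state_dict.items() if k not in adapter}
    return base, adapter

sd = {"layers.0.attn.q_proj.weight": 0,
      "layers.0.attn.q_proj.lora_a.weight": 1,
      "layers.0.attn.q_proj.lora_b.weight": 2}
base, adapter = split_adapter_weights(sd)
assert len(base) == 1 and len(adapter) == 2
```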
-
**Describe the bug**
When I run the generation speed benchmark, I get the following message:
> INFO - You passed a model that is compatible with the Marlin int4*fp16 GPTQ kernel but use_marlin is Fal…
-
The PEFT code snippet loads a `PeftConfig` parameter that is not used anywhere; I think it would be better to remove it, since the entire script functions without it.
cc @BenjaminBossan if you have…
-
### 🐛 Describe the bug
When I try to use multi-GPU training with Accelerate, I get an error.
Code:
```
import trlx
from peft import LoraConfig, TaskType
from trlx.data.configs import (
Mod…
-
While attempting to set up and run the demo notebook from the repository, I encountered multiple issues related to environment setup, package dependencies, and code configurations that significantly h…
-
(Q)DoRA, an alternative to (Q)LoRA, is quickly proving to be a superior technique at closing the gap between FFT and PEFT.
Known existing implementations:
- https://github.com/huggingface/…
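For context, a toy numeric sketch of the DoRA reparameterization itself (plain Python, not any library's API): the weight is decomposed into a per-column magnitude `m` and a direction `V = W0 + B @ A`, and the effective weight is `W' = m * V / ||V||_col`, with `m` initialized to the column norms of `W0`.

```python
import math

# Toy sketch of the DoRA weight decomposition, not a library API.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def col_norms(W):
    """Per-column Euclidean norms of a matrix given as nested lists."""
    return [math.sqrt(sum(W[i][j] ** 2 for i in range(len(W))))
            for j in range(len(W[0]))]

def dora_weight(W0, B, A, m):
    """Effective weight W' = m * (W0 + B@A) / ||W0 + B@A||_col."""
    BA = matmul(B, A)
    V = [[W0[i][j] + BA[i][j] for j in range(len(W0[0]))]
         for i in range(len(W0))]
    n = col_norms(V)
    return [[m[j] * V[i][j] / n[j] for j in range(len(V[0]))]
            for i in range(len(V))]

W0 = [[1.0, 0.0], [0.0, 1.0]]
B = [[0.0], [0.0]]       # zero-initialized low-rank update, as in LoRA
A = [[0.0, 0.0]]
m = col_norms(W0)        # magnitude initialized to the column norms of W0
assert dora_weight(W0, B, A, m) == W0   # zero update leaves W0 unchanged
```

In `peft` itself, DoRA can be enabled by passing `use_dora=True` to `LoraConfig` (available in recent releases).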
-
I use the code from the FinGPT benchmark to evaluate my PEFT model on the fpb, fiqa, and tfns datasets.
However, the inference speed is very slow, and increasing the batch size doesn't help with this issu…
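One common cause of slow PEFT inference is serving with the adapter still unmerged, which adds extra matmuls to every adapted layer. A toy sketch of why merging helps (plain Python, no `peft`): folding the LoRA update into the base weight once makes the merged forward pass a single matmul that produces identical outputs.

```python
# Toy sketch: merged LoRA weight W' = W + scale * B@A gives the same
# output as the unmerged path W x + scale * B (A x), with fewer matmuls.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def merge(W, B, A, scale):
    """Fold the scaled low-rank update into the base weight."""
    BA = matmul(B, A)
    return [[W[i][j] + scale * BA[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

W = [[1.0, 2.0], [3.0, 4.0]]
B = [[1.0], [0.0]]
A = [[0.0, 1.0]]
x = [[1.0], [1.0]]                      # input as a column vector
W_merged = merge(W, B, A, scale=0.5)

# unmerged: W x + scale * (B@A) x   vs   merged: W' x  -- same result
unmerged = [[wx + 0.5 * bax for wx, bax in zip(r1, r2)]
            for r1, r2 in zip(matmul(W, x), matmul(matmul(B, A), x))]
assert unmerged == matmul(W_merged, x)
```

In `peft`, this merge is typically done once before benchmarking via `model.merge_and_unload()`; whether it explains the slowdown here depends on the FinGPT benchmark code.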
-
Hi,
thank you for this inspiring work!
I'm reproducing the reported results in the paper for the GLUE benchmark with DeBERTa-v3-base and peft. Here are my settings:
```
## For OFT with b=16…
-
When trying to load this PEFT LoRA model, I run into multiple issues.
https://www.dropbox.com/scl/fi/30y9yn26ao8pnwch7z1ex/test_lora.zip?rlkey=r6kvgzwvrqm9tnw4jz8ctgu2f&st=x8r69pb3&dl=0
The first is th…