-
Hi, I have been trying to execute your code with the mentioned requirements, but it fails with the following error:
`ImportError: cannot import name 'split_torch_state_dict_into_shards' from '…
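If it helps narrow this down, a minimal check like the sketch below, assuming the error comes from an outdated `huggingface_hub` (newer `transformers`/`peft` releases import `split_torch_state_dict_into_shards` from that package, and older `huggingface_hub` versions do not ship it):
```python
# Sketch: confirm which huggingface_hub is installed and whether it exposes
# split_torch_state_dict_into_shards (the name the ImportError complains about).
import importlib.metadata

import huggingface_hub

print("huggingface_hub version:", importlib.metadata.version("huggingface_hub"))
print(
    "has split_torch_state_dict_into_shards:",
    hasattr(huggingface_hub, "split_torch_state_dict_into_shards"),
)
# If the attribute is missing, upgrading usually resolves the ImportError:
#   pip install -U huggingface_hub
```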
-
From what I can see, the repo does not modify the `generate` function. Is `task_type` not needed during evaluation? I would expect that to cause an error.
It would be great if the eval code could be provided.
```python
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "2"
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCau…
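For reference, a minimal load-and-generate sketch of the kind of eval I mean, assuming a causal-LM adapter; the adapter path and prompt are placeholders, and `task_type` is read from the saved adapter config rather than passed to `generate`:
```python
# Hedged sketch, not the repo's eval script: load the adapter and run generation.
import torch
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

adapter_path = "path/to/adapter"  # placeholder
config = PeftConfig.from_pretrained(adapter_path)  # task_type comes from adapter_config.json
base = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path, torch_dtype=torch.float16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(base, adapter_path)
model.eval()

inputs = tokenizer("Example eval prompt:", return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```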
-
Hi,
I am trying to do inference using
```python
from transformers import AutoProcessor, BitsAndBytesConfig, Idefics2ForConditionalGeneration
import torch
peft_model_id = "idefics2-finetuned"
…
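A sketch of the rest of the loading path I have in mind, assuming `idefics2-finetuned` holds a PEFT adapter trained on top of `HuggingFaceM4/idefics2-8b`; the base-model id and the 4-bit settings below are assumptions:
```python
# Assumed base checkpoint and quantization settings; only the adapter id comes from above.
import torch
from peft import PeftModel
from transformers import AutoProcessor, BitsAndBytesConfig, Idefics2ForConditionalGeneration

peft_model_id = "idefics2-finetuned"
base_model_id = "HuggingFaceM4/idefics2-8b"  # assumption

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

processor = AutoProcessor.from_pretrained(base_model_id)
base_model = Idefics2ForConditionalGeneration.from_pretrained(
    base_model_id, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, peft_model_id)
model.eval()
```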
-
I am using a ChatML template like this to format prompts:
```
def format_conversation(examples):
conversations = examples['conversation']
texts = []
for convo in conversations:
…
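A complete sketch of what this is meant to do, written against `tokenizer.apply_chat_template`; the `tokenizer` argument and the assumption that each `conversation` entry is a list of role/content dicts are mine:
```python
# Hedged sketch: render each conversation with the tokenizer's ChatML-style chat template.
# Assumes examples["conversation"] is a list of [{"role": ..., "content": ...}, ...] lists.
def format_conversation(examples, tokenizer):
    conversations = examples["conversation"]
    texts = []
    for convo in conversations:
        texts.append(
            tokenizer.apply_chat_template(convo, tokenize=False, add_generation_prompt=False)
        )
    return {"text": texts}
```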
-
### System Info
- `transformers` version: 4.34.0.dev0
- Platform: Linux-5.14.0-284.25.1.el9_2.x86_64-x86_64-with-glibc2.34
- Python version: 3.11.4
- Huggingface_hub version: 0.16.4
- Safetensors…
-
Repro below minified from a torchtune model, still investigating:
```
import torch
def forward(
s0: "",
s1: "",
L_x_: "",
L_self_modules_sa_norm_parameters_scale_: "",
…
-
I prepared the CSV, but in the next step I get this:
```
❯ python -m finetuning --dataset "custom_dataset" --custom_dataset.file "scripts/custom_dataset.py" --whatsapp_username "Jorge"
Tracebac…
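For comparison, a skeleton of what `scripts/custom_dataset.py` is typically expected to expose through `--custom_dataset.file`, assuming this is the llama-recipes finetuning entry point; the CSV path and column name are placeholders:
```python
# Hypothetical skeleton for scripts/custom_dataset.py: llama-recipes loads the file and
# calls get_custom_dataset(dataset_config, tokenizer, split).
import datasets


def get_custom_dataset(dataset_config, tokenizer, split):
    # Placeholder CSV path and column; adjust to the file prepared in the previous step.
    dataset = datasets.load_dataset("csv", data_files="scripts/chat_export.csv", split="train")

    def tokenize(example):
        return tokenizer(example["text"], truncation=True, max_length=512)

    return dataset.map(tokenize, remove_columns=dataset.column_names)
```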
-
### System Info
peft = 0.13.2
python = 3.12.7
transformers = 4.45.2
### Who can help?
@sayakpaul
I am using `inject_adapter_in_model(...)` to finetune a model from OpenCLIP with LoRA layers…
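A condensed sketch of the setup, assuming the open_clip ViT-B/32 checkpoint and the `target_modules` names below, which depend on the exact OpenCLIP architecture and are guesses on my part:
```python
# Sketch: inject LoRA layers into an OpenCLIP model with peft.inject_adapter_in_model.
# Checkpoint name and target_modules are assumptions; inspect model.named_modules()
# to pick the right submodule names for your architecture.
import open_clip
from peft import LoraConfig, inject_adapter_in_model

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k"
)

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["c_fc", "c_proj"],  # guessed MLP projection names in the transformer blocks
)
model = inject_adapter_in_model(lora_config, model)

# Depending on the PEFT version you may still need to freeze the base weights yourself.
trainable = [n for n, p in model.named_parameters() if p.requires_grad]
print(f"{len(trainable)} parameter tensors require grad")
```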
-
I need a way to evaluate a model like this:
https://huggingface.co/qblocks/falcon-7b-python-code-instructions-18k-alpaca
This is a model finetuned from the open-source base model falcon-7b for code gene…
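As a starting point, a generation-only spot check along these lines might be enough, assuming the repo contains full model weights rather than just a LoRA adapter (if it is an adapter, the falcon-7b base would need to be loaded first and wrapped with `PeftModel.from_pretrained`); the Alpaca-style prompt format is also an assumption based on the dataset name, and a proper benchmark would go through something like lm-evaluation-harness:
```python
# Minimal spot check, not a full evaluation: generate code for one instruction.
# Assumes full weights in the repo and an Alpaca-style prompt template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "qblocks/falcon-7b-python-code-instructions-18k-alpaca"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "### Instruction:\nWrite a Python function that reverses a string.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```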
-
I'm trying to apply PEFT to it. I have found some datasets, but they are either too small or require too many headers to install, and the install commands for the different headers differ greatly.
So I wonder if you have…