-
I am trying to fine-tune Llama 3.2 Vision Instruct, using the distributed recipe and the example LoRA config as a starting point. Eventually I want to use a custom dataset, but first, I am…
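For context, this is roughly the adapter setup I am aiming for, sketched with Hugging Face PEFT rather than the torchtune recipe itself; the checkpoint id and `target_modules` below are my assumptions, not tested values:
```python
# Sketch only: LoRA adapter setup via Hugging Face PEFT, not the torchtune recipe.
# The checkpoint id and target_modules are assumptions.
import torch
from transformers import MllamaForConditionalGeneration, AutoProcessor
from peft import LoraConfig, get_peft_model

model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"  # assumed checkpoint
model = MllamaForConditionalGeneration.from_pretrained(model_id, torch_dtype=torch.bfloat16)
processor = AutoProcessor.from_pretrained(model_id)

lora_cfg = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # common choice for attention projections
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # sanity check: only adapter weights should train
```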
-
Thanks for your impressive work! However, the dataset provided on the Baidu Netdisk doesn't match the finetuning process. If I want to reproduce the results in your paper, should I download the official SUN RGB-D d…
-
Hello!
I am trying to finetune either the vit_s or vit_b model on my dataset. I have tried training only the dino head, both the dino and ibot heads, and keeping the whole backbone frozen or unfr…
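For reference, the backbone-freezing part of my setup looks roughly like the sketch below; the linear head, class count, and dummy batch are placeholders standing in for the actual dino/ibot heads and data:
```python
# Sketch of the frozen-backbone variant described above; head and batch are placeholders.
import torch
import torch.nn as nn

backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")  # pretrained ViT-S/14
for p in backbone.parameters():
    p.requires_grad = False  # freeze the whole backbone

head = nn.Linear(backbone.embed_dim, 10)  # hypothetical 10-class head
opt = torch.optim.AdamW(head.parameters(), lr=1e-3)

x = torch.randn(2, 3, 224, 224)  # dummy batch, 224 is divisible by the 14-pixel patch
with torch.no_grad():
    feats = backbone(x)  # CLS-token features, shape (2, embed_dim)
loss = nn.functional.cross_entropy(head(feats), torch.tensor([0, 1]))
loss.backward()
opt.step()
```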
-
Hey guys,
I'm pretty new here and just trying to figure all this out.
I finally managed to get my first finetuning run working, but I'm kinda confused.
I'm using the thomas - medium model (German) for fine…
-
### Describe the issue
**Issue:**
I ran into tokenization mismatch errors when I tried to fine-tune from Llama-3.1. I pre-trained a new MLP adapter for Llama-3.1 and that seems to work, but the fine…
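To illustrate where I think the mismatch comes from: the preprocessing tokenizes the conversation in pieces and checks the lengths against the whole string. Below is my reduced repro; the base checkpoint and the template fragment are guesses at the relevant part:
```python
# Reduced repro sketch: tokenize a chat turn as one string vs. in pieces.
# If the two lengths differ, the label-masking offsets drift, which is what
# triggers the "tokenization mismatch" warning for me. Names are assumptions.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")  # assumed base
prefix = "<|start_header_id|>user<|end_header_id|>\n\n"
body = "Hello<|eot_id|>"

whole = tok(prefix + body, add_special_tokens=False).input_ids
parts = (tok(prefix, add_special_tokens=False).input_ids
         + tok(body, add_special_tokens=False).input_ids)
print(len(whole), len(parts))  # any difference here breaks the mask alignment
```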
-
That sounds massively interesting, and while we try to run inference and read the paper, should we expect the release of the finetuning code?
-
Good job! Do you plan to support LoRA or other PEFT methods?
-
I was trying to finetune a model as mentioned in the docs, but after training, when I try to load the model, I get the following error:
```
[Error(s) in loading state_dict for SubwordBert:
s…
```
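In case it helps: the usual causes of this kind of state_dict error are key-prefix mismatches (e.g. checkpoints saved from `nn.DataParallel`) or genuinely missing keys. A generic probe looks like the sketch below, where `nn.Linear` is only a stand-in for the real SubwordBert model:
```python
# Generic probe for state_dict key mismatches; nn.Linear stands in for SubwordBert.
import torch
import torch.nn as nn

model = nn.Linear(4, 2)
# Checkpoints saved from nn.DataParallel prefix every key with "module.";
# stripping that prefix is the usual first fix to try.
state = {"module.weight": torch.zeros(2, 4), "module.bias": torch.zeros(2)}
state = {k.removeprefix("module."): v for k, v in state.items()}
missing, unexpected = model.load_state_dict(state, strict=False)
print("missing:", missing, "unexpected:", unexpected)
```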
-
Hi,
Thanks for the great work. Is it possible to release the data (and code) used to fine-tune the language model?
-
Hello,
I am trying to finetune the tapas_wtq_wikisql_sqa_masklm_medium_reset checkpoint.
Just to see if it works in general, I wanted to finetune it on the same data it was already trained on, WTQ. Creating …
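For anyone following along, the smallest end-to-end check I know of uses the Hugging Face TAPAS port rather than the original TF checkpoints; the WTQ-finetuned model id and the toy table below are placeholders, not the masklm checkpoint named above:
```python
# Smoke-test sketch using the Hugging Face TAPAS port; model id and table are placeholders.
import pandas as pd
from transformers import TapasTokenizer, TapasForQuestionAnswering

name = "google/tapas-base-finetuned-wtq"  # assumed stand-in checkpoint
tokenizer = TapasTokenizer.from_pretrained(name)
model = TapasForQuestionAnswering.from_pretrained(name)

# TapasTokenizer expects every cell as a string.
table = pd.DataFrame({"City": ["Paris", "Berlin"], "Population": ["2.1M", "3.6M"]})
inputs = tokenizer(table=table, queries=["Which city has more people?"], return_tensors="pt")
outputs = model(**inputs)  # cell-selection logits plus aggregation logits
```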