-
Hi,
I'm trying to finetune this for a regression problem with continuous labels. For that, I changed the 'num_labels' to 1 in the model as follows.
```
model = transformers.AutoModelForSequenceClassification.from_pretrained(..., num_labels=1)
```
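For context, here is a minimal end-to-end sketch of the setup I mean (the checkpoint name and inputs below are placeholders, not my actual data): with `num_labels=1` and float labels, recent `transformers` versions infer `problem_type="regression"` and train the head with MSE loss.
```
import torch
import transformers

checkpoint = "bert-base-uncased"  # placeholder; any encoder checkpoint works here

tokenizer = transformers.AutoTokenizer.from_pretrained(checkpoint)
model = transformers.AutoModelForSequenceClassification.from_pretrained(
    checkpoint, num_labels=1
)

# Continuous (float) labels make the single-output head use MSE loss.
batch = tokenizer(["an example sentence"], return_tensors="pt")
labels = torch.tensor([0.73])

outputs = model(**batch, labels=labels)
print(outputs.loss, outputs.logits.shape)  # scalar MSE loss, logits of shape (1, 1)
```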
-
# Out-of-Domain Finetuning to Bootstrap Hallucination Detection
How to use open-source, permissive-use data and collect fewer labeled samples for our tasks.
[https://eugeneyan.com/writing/finetuning/…
-
Hey,
what would be the best way to take such a great finetuned model like:
https://huggingface.co/T-Systems-onsite/cross-en-de-roberta-sentence-transformer
and finetune it for my task? My task …
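The task description above is cut off, but assuming a sentence-similarity-style objective with scored pairs, one common approach is to continue training with `sentence-transformers` and a cosine-similarity loss; the pairs and hyperparameters below are made-up placeholders.
```
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Start from the pretrained cross-lingual model.
model = SentenceTransformer("T-Systems-onsite/cross-en-de-roberta-sentence-transformer")

# Placeholder training pairs with similarity scores in [0, 1].
train_examples = [
    InputExample(texts=["A man is eating food.", "Ein Mann isst etwas."], label=0.9),
    InputExample(texts=["A man is eating food.", "Ein Flugzeug startet."], label=0.1),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)

# CosineSimilarityLoss keeps the embeddings usable for similarity search afterwards.
train_loss = losses.CosineSimilarityLoss(model)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=10)
```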
-
After I finetuned on DocVQA following the guideline:
![Train](https://github.com/clovaai/donut/assets/10350878/db44f43f-d730-4ff6-ab75-8a18068f1940)
I looked in the "result" folder and there are some files lik…
-
Thanks for your great work! When will the finetuning code be open-sourced?
-
### Question
First of all, great work, and thank you so much for open-sourcing it! I wonder if the stage 2 model (referred to as ViP-LLaVA-Base) has been released anywhere? Maybe mucai/vip-llava-13b-pretr…
-
### Describe the issue
I have an LLM finetuned for a downstream task using input-output pair data (`X_train` - `Y_train`).
Now I plan to utilize llmlingua2 to compress `X_test` --> `X_test_compre…
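For illustration, a minimal sketch of the compression step I have in mind, using the LLMLingua-2 `PromptCompressor` (the checkpoint and compression rate below are just example choices):
```
from llmlingua import PromptCompressor

# LLMLingua-2 token-classification compressor; checkpoint and rate are illustrative.
compressor = PromptCompressor(
    model_name="microsoft/llmlingua-2-xlm-roberta-large-meetingbank",
    use_llmlingua2=True,
)

x_test_example = "a long test-set input for the downstream task goes here"
result = compressor.compress_prompt(x_test_example, rate=0.5)

x_test_compressed = result["compressed_prompt"]
# x_test_compressed would then be sent to the finetuned LLM in place of the raw input.
```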
-
Hello, I can finetune the model with bs=1 during training, but at the inference stage, even with bs=1, it runs out of memory, which is quite confusing. Are there any parameters that I forgot to set?
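One thing worth double-checking (just a guess, since the framework and inference code aren't shown): if inference runs without disabling gradient tracking, autograd keeps intermediate activations for a backward pass that never happens, which can OOM even at bs=1. A minimal PyTorch sketch of the usual guard:
```
import torch

def run_inference(model: torch.nn.Module, batch: dict):
    """Forward pass with gradient tracking disabled to keep memory flat."""
    model.eval()  # turn off dropout and similar training-only behaviour
    with torch.inference_mode():  # no activation buffers kept for backward
        return model(**batch)
```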
-
Dear author, I used the pretrain_full.pt you provided and ran the finetune.sh and generate.sh scripts in sequence, and the results are as follows. Is there anything wrong with the tr…