-
Hello!
I am trying to finetune either the vit_s or vit_b models to my dataset. I have tried training only the dino head, both the dino and ibot heads, and keeping the whole backbone frozen or unfr…
-
Hey guys,
I'm pretty new here, just trying to figure all this out.
I finally managed to get my first finetuning run going, but I'm kinda confused.
I'm using the thomas - medium model (German) for fine…
-
Hi, thanks for sharing this wonderful work. Since you use multi-frame multi-view inputs during the pretraining stage, I want to know whether you still used the temporal multi-frame inputs during fi…
-
Hello everyone, below is my code for fine-tuning XTTS for a new language. It works well in my case with over 100 hours of audio.
https://github.com/nguyenhoanganh2002/XTTSv2-Finetuning-for-New-Lang…
-
**Research question**
Does BERT finetuning increase performance?
**Hypothesis**
Yes, the classifier in BERT will also pull apart the word embeddings belonging to specific locations, making it eas…
-
I get the following error when finetuning DocOwl1.5-Omni. It always raises an error when the index is 10. Please help!
```
File "/opt/conda/envs/mplug_owl2/lib/python3.10/site-packages/deepspeed/run…
```
-
I'm trying to use the `generate_qa_embedding_pairs` method to create synthetic data.
`from llama_index.finetuning import generate_qa_embedding_pairs`
I run into an error:
```
---------------------…
```
-
# Title of the Talk: No Code SLM Finetuning with MonsterAPI
## Abstract of the Talk:
Dive into the world of no-code large language model (LLM) finetuning in this informative talk presented by Mons…
-
When finetuning the model, I will need to create a dataset of some episodes.
Do you have any resources on how you recorded/created an RLDS dataset? It seems to be somewhat niche, with little documentat…
-
That sounds massively interesting, and while we try to run inference and read the paper, should we expect the release of the finetuning code?