-
Thanks for the great work!
I am really interested in fine-tuning this on my own data, but I am not quite sure if it is possible.
Can I also fine-tune the model on instance segmentation data? Semantic …
-
"I only have one A800 80G; can I fine-tune the DiT model with it?"
-
Hi,
what context size and devset size do you think are reasonable for the e2e fine-tuning step, given that I have one GPU with 48 GB?
Thank you so much.
-
Hi! First of all, thank you for your great work!
Do you plan to make LoRA fine-tuning possible?
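For reference while waiting for an official answer: with the Hugging Face `peft` library, a minimal LoRA setup often looks roughly like the sketch below. The checkpoint name and `target_modules` are placeholders, not this repo's actual API.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("your-base-model")  # placeholder checkpoint

config = LoraConfig(
    r=16,                                 # rank of the low-rank update
    lora_alpha=32,                        # scaling applied to the update
    target_modules=["q_proj", "v_proj"],  # which projections to adapt (assumption)
    lora_dropout=0.05,
)

model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the adapter weights remain trainable
```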
-
Hello,
thank you for making this public. Nice results considering the model size! May I ask:
1.) How many hours of audio were in the dataset you used for training?
2.) Do you plan to rele…
-
Hi all, I am trying to fine-tune models with extremely long contexts.
I've tested the training setup below, and I managed to fine-tune:
- llama3.1-1B with a max_sequence_length of 128 * 1024 tokens
…
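The poster's actual setup is truncated above, so as a generic point of reference only: long-context fine-tuning runs usually combine levers like the ones below. The checkpoint name is a placeholder and the hyperparameters are illustrative assumptions, not the poster's configuration.

```python
import torch
from transformers import AutoModelForCausalLM, TrainingArguments

model = AutoModelForCausalLM.from_pretrained(
    "your-llama-checkpoint",                  # placeholder for the 1B model above
    torch_dtype=torch.bfloat16,               # halve weight memory vs fp32
    attn_implementation="flash_attention_2",  # memory-efficient attention for long sequences
)
model.gradient_checkpointing_enable()         # trade recompute for activation memory

args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=1,            # 128k-token sequences usually force batch size 1
    gradient_accumulation_steps=16,           # recover effective batch size via accumulation
    bf16=True,
)
```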
-
### Search before asking
- [X] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/ultralytics/ultralytics/discussion…
-
Hi! Thank you for sharing this awesome training code!
I was trying to train my custom tagger, and in the codebase there are both a pretraining step and a fine-tuning step.
Is there any differe…
-
Hello,
how are you?
Thanks for contributing to this project.
Is it possible to fine-tune the Whisper model for multiple languages?
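Whisper is multilingual by design, so one common recipe (not necessarily the maintainers' answer) is to set the language prefix token per example when building the labels, which lets a single fine-tuning run mix several languages in one dataset. The texts and language names below are made up for illustration.

```python
from transformers import WhisperProcessor

processor = WhisperProcessor.from_pretrained("openai/whisper-small")

def make_labels(text, language):
    # The prefix tokens (<|xx|>, <|transcribe|>) are chosen per example,
    # so one fine-tuning run can cover several languages.
    processor.tokenizer.set_prefix_tokens(language=language, task="transcribe")
    return processor.tokenizer(text).input_ids

labels_de = make_labels("ein Beispielsatz", language="german")
labels_fr = make_labels("une phrase d'exemple", language="french")
```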
-
Hello, I am very curious how long the pretraining will take. I ran the fine-tuning on two 4090s for 100,000 epochs, which took almost three days. What type of GPUs do you use?
The 100,000-epoch finetu…