-
Thanks for the great work!
I am really interested in fine-tuning this on my own data, but I am not quite sure if it is possible.
Can I also finetune the model on instance segmentation data? Semantic …
-
"I only have one A800 80G; can I fine-tune the DiT model with it?"
-
hi,
What context size and dev-set size do you think are reasonable for the e2e fine-tuning step, given that I have one GPU with 48 GB?
thank you so much
-
Hi! First of all, thank you for your great work!
Do you plan to make LoRA fine-tuning possible?
-
Hi all, I am trying to fine-tune models on extremely long contexts.
I've tested the training setup below, and I managed to fine-tune:
- llama3.1-1B with a max_sequence_length of 128 * 1024 tokens
…
-
### Search before asking
- [X] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/ultralytics/ultralytics/discussion…
-
Hi! Thank you for sharing this awesome training code!
I was trying to train my custom tagger, and in the codebase there is both pre-trained model training and fine-tuning training;
is there any differe…
-
Hello
How are you?
Thanks for contributing to this project.
Is it possible to fine-tune the Whisper model for multiple languages?
-
Hi, I have a question about data collation for fine-tuning. I have some input questions and some targets, and I wish to know whether I need to include the inputs as part of my labels during causal fine-tuning. Spec…
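A common convention for this (a sketch, not tied to any particular codebase mentioned above) is to keep the prompt tokens in `input_ids` but mask them out of the labels with `-100`, the index that PyTorch's `CrossEntropyLoss` ignores by default, so the loss is computed only on the target tokens. The token ids and helper name below are hypothetical, for illustration:

```python
# Sketch of prompt masking for causal-LM fine-tuning: the model still
# attends over the full prompt + target sequence, but prompt positions
# get label -100 so the cross-entropy loss skips them and only the
# target tokens contribute gradient.

IGNORE_INDEX = -100  # default ignore_index of torch.nn.CrossEntropyLoss

def build_labels(prompt_ids, target_ids):
    """Concatenate prompt and target; mask the prompt span in the labels."""
    input_ids = list(prompt_ids) + list(target_ids)
    labels = [IGNORE_INDEX] * len(prompt_ids) + list(target_ids)
    return input_ids, labels

# Hypothetical token ids for a question prompt and its answer.
prompt = [101, 2054, 2003]
target = [3437, 102]
input_ids, labels = build_labels(prompt, target)
# input_ids -> [101, 2054, 2003, 3437, 102]
# labels    -> [-100, -100, -100, 3437, 102]
```

Whether to also train on the prompt tokens (i.e., copy `input_ids` into `labels` unmasked) is a design choice; masking as above is the usual default for instruction-style input/target pairs.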
-
From https://github.com/OpenAdaptAI/OmniParser/issues/3:
1. **Objective**:
- Implement fine-tuning for OmniParser’s YOLO model to enhance detection accuracy on small icons and UI elements.
2…