-
Hi @loubnabnl, thanks for this great repo.
I've seen a blog from the VMware OCTO, which described their work on fine-tuning **star-coder**, but `modified the code provided by the [SantaCoder](http…
-
I've been looking at tuning the drivers and [according to the klipper docs, hold_current should no longer be used](https://www.klipper3d.org/TMC_Drivers.html#prefer-to-not-specify-a-hold_current). How…
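Following that recommendation, a minimal sketch of what a driver section looks like with `hold_current` left out (the section name, pin, and current value below are illustrative placeholders, not taken from any real config):

```ini
# Illustrative TMC2209 section for Klipper's printer.cfg; pin and current
# values are placeholders, use your board's actual wiring and motor specs.
[tmc2209 stepper_x]
uart_pin: PC11
run_current: 0.580
# hold_current intentionally omitted: the Klipper docs advise against
# specifying it, since lowering motor current while holding can let the
# carriage shift and cause a position error on the next move.
```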
-
Paper: Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Link: https://arxiv.org/pdf/2306.14565.pdf
Name: LRV-Instruction
Focus: Multimodal
Notes: A benchmark to e…
-
Hi @baifanxxx :
I'm encountering an issue where the forward pass of the `SegVol` class hangs when the `image` is passed to `image_encoder`, resulting in NCCL communication timeouts during fine-tuning with…
-
### Description
I encountered an error while trying to fine-tune the llama3 model using unsloth. The error occurs during the `trainer.train()` step, and it appears to be related to a missing Python…
-
[txtinstruct](https://github.com/neuml/txtinstruct) is a great library for instruction fine-tuning.
I'm just wondering how to save the instruction fine-tuned model? Please help me with this.
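On saving: txtinstruct builds on Hugging Face transformers, so assuming you can get at the underlying model and tokenizer objects, a sketch could look like the following (the function name and arguments are illustrative, not txtinstruct's actual API):

```python
# Hedged sketch: assumes the fine-tuned model is a standard Hugging Face
# transformers model/tokenizer pair. `model`, `tokenizer`, and the output
# directory are placeholders supplied by the caller.

def save_finetuned(model, tokenizer, output_dir):
    """Persist a fine-tuned model so it can later be reloaded with from_pretrained()."""
    model.save_pretrained(output_dir)      # writes config.json plus the weights
    tokenizer.save_pretrained(output_dir)  # writes tokenizer files alongside them
    return output_dir
```

The saved directory can then be reloaded with the matching `from_pretrained()` call for the model class that was fine-tuned.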
-
Dear authors,
Thanks for your work! I am interested in applying it in my study. I wonder if you could provide the fine-tuned WizardCoder model file, ready to use. Or could you pleas…
-
Paper : [https://arxiv.org/pdf/2406.16860](https://arxiv.org/pdf/2406.16860)
Website : [https://cambrian-mllm.github.io](https://cambrian-mllm.github.io)
Code : [https://github.com/cambrian-mllm/cam…
-
Lit-gpt has superb libraries for instruction tuning, but instruction-tuned models are rarely evaluated with lm-eval-harness.
MT benchmark: https://lmsys.org/blog/2023-06-22-leaderboard…
-
Hello!
Great project, much easier to understand/hack than the Microsoft one.
Is there any plan to support the prompt tuning feature? https://microsoft.github.io/graphrag/posts/prompt_tuning/ov…