-
### Question
1. Could you explain why the loss of LLaVA-1.5 is higher than that of LLaVA (in both the pretraining and visual instruction tuning stages, I think), yet it achieves better results?
2. Also, why did the **spike**…
-
Dear Author,
Thank you for sharing your work on this project. I noticed that the repository currently doesn’t include the training code (train.py). I would greatly appreciate it if you could share …
-
Hi!
I am wondering whether I can fine-tune this model on my own molecules. Is there a fine-tuning interface available?
I noticed there is a training instruction. Could that be used for fine-tuning?
Tha…
-
It would be great to enable users to easily fine-tune base models for their unique downstream tasks.
-
### Product Category
JumpStart
### Feedback Category
Clarity and Comprehensibility, Missing Examples and Tutorials
### Reference Link
_No response_
### Details
It would be benefic…
-
Hi,
I'm a bit confused. Should I use the Gemma formatting tags during fine-tuning (https://ai.google.dev/gemma/docs/formatting), or should I use this template: 'Instruction:\n{instruction}\n\nResponse:\n{…
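For reference, a minimal sketch of the two candidate prompt formats mentioned above. The turn tags follow Google's published Gemma chat formatting; the plain "Instruction/Response" template is the generic alternative from the question (its full form is truncated above, so the version here is an assumption). Which one is correct depends on how the base checkpoint was trained.

```python
# Sketch of the two prompt formats under discussion.
# Tags follow Google's documented Gemma chat format; the plain
# template below is an assumed reconstruction of the truncated one.

def gemma_chat_format(instruction: str) -> str:
    """Gemma's turn-based control tokens (per ai.google.dev/gemma/docs/formatting)."""
    return (
        "<start_of_turn>user\n"
        f"{instruction}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

def plain_instruction_format(instruction: str) -> str:
    """Generic instruction/response template (no special tokens; assumed form)."""
    return f"Instruction:\n{instruction}\n\nResponse:\n"

print(gemma_chat_format("Summarize the paper in one sentence."))
```

In general, if the base model was instruction-tuned with the Gemma chat tokens, reusing those same tags during fine-tuning avoids a format mismatch.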
-
Dear authors.
![Screenshot 2024-10-28 210315](https://github.com/user-attachments/assets/2bcc3111-088e-4690-ae93-ecf3faf2f9ca)
For the Natural Language Instruction Tuning section, when I ran the trafficl…
-
- Paper name: Unnatural Instructions: Tuning Language Models with (Almost) No Human Labor
- ArXiv Link: https://arxiv.org/abs/2212.09689
To close this issue, open a PR with a paper report using t…
-
Hello, authors!
Thanks for your valuable work! I am trying to run `zero_shot.py` on my own dataset using the `PA-LLaVA` checkpoint. Could you please clarify which checkpoint the `--llava ./instruct…
-
### Question
ShareGPT is used for instruction fine-tuning, with the aim of inserting image-independent, pure-text conversations alongside multi-round image conversations, so that the model…
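The data-mixing idea described above can be sketched as follows. This is a minimal illustration, not the authors' implementation; the record layout (a `conversations` list, with an optional `image` key for multimodal samples) follows the common LLaVA-style JSON convention and is an assumption here.

```python
import random

# Sketch: merge text-only ShareGPT conversations into the pool of
# image conversations so the model retains pure-language dialogue
# ability during visual instruction tuning. Field names are assumed
# (LLaVA-style JSON); samples without an "image" key are text-only.

def mix_datasets(image_samples, text_samples, seed=0):
    """Combine multimodal and text-only samples, then shuffle deterministically."""
    merged = list(image_samples) + list(text_samples)
    random.Random(seed).shuffle(merged)
    return merged

image_samples = [
    {"image": "0001.jpg",
     "conversations": [{"from": "human", "value": "<image>\nWhat is shown here?"}]},
]
text_samples = [
    {"conversations": [{"from": "human", "value": "Write a haiku about rain."}]},
]

mixed = mix_datasets(image_samples, text_samples)
```

At training time, the data loader would skip the vision encoder for samples lacking an `image` key, so both kinds of conversation flow through the same language-model loss.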