-
I wonder where I can find the instruction/fine-tuning data that can be used to tune the LLM?
-
Example false evaluation:
```
'question_id': '20128',
'q_type_id': 2,
'question': 'What is visible in the image besides the sea?',
'gt': 'Building'
'prediction': 'Trees'
'open_prediction': In t…
```
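A record like this suggests the failure mode: the closed-set prediction ("Trees") fails an exact-match check against the ground truth ("Building"), even though the free-form answer may still mention the correct object. A minimal sketch of both checks (the function names, the containment fallback, and the example `open_prediction` string are my assumptions, not the benchmark's actual scorer):

```python
def exact_match(gt: str, pred: str) -> bool:
    """Case-insensitive exact match between ground truth and closed-set prediction."""
    return gt.strip().lower() == pred.strip().lower()

def open_match(gt: str, open_pred: str) -> bool:
    """Relaxed check: does the free-form answer mention the ground-truth object?"""
    return gt.strip().lower() in open_pred.lower()

# Hypothetical record in the shape shown above (open_prediction text invented).
record = {"gt": "Building", "prediction": "Trees",
          "open_prediction": "In the image there are buildings along the shore."}
print(exact_match(record["gt"], record["prediction"]))      # False
print(open_match(record["gt"], record["open_prediction"]))  # True
```

If the open-ended answer is what gets graded, a containment or fuzzy check like `open_match` avoids marking such answers false.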
-
- [ ] [LLaVA/README.md at main · haotian-liu/LLaVA](https://github.com/haotian-liu/LLaVA/blob/main/README.md?plain=1)
## 🌋 LLaVA: Large Language and Vi…
-
Hi, thanks for the amazing code. I wonder when you plan to release the code for fine-tuning V2? Also, do you plan to add Falcon fine-tuning?
Thanks
-
🙂🙏 Thanks for open-sourcing this!
After training on my own data the results actually got worse; could you help me figure out what the problem is? Thanks in advance.
**1. Training data**
My data consists of single-line text images that I composited into larger images with multiple lines (2–10 lines at random), 10,000 synthesized images in total, all grayscale.
![output_document_1](https://github.com/user-attachments/assets/a266c966-6476-449…
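The compositing step described above amounts to vertically stacking single-line grayscale images into one page image. A minimal NumPy sketch, assuming each line is a `(height, width)` uint8 array and narrower lines are padded with white; these shapes and the padding policy are my assumptions, not the poster's actual pipeline:

```python
import numpy as np

def composite_page(line_images, pad_value=255):
    """Stack single-line grayscale images (H_i, W_i) into one page,
    right-padding narrower lines with pad_value (white)."""
    max_w = max(img.shape[1] for img in line_images)
    padded = [
        np.pad(img, ((0, 0), (0, max_w - img.shape[1])), constant_values=pad_value)
        for img in line_images
    ]
    return np.vstack(padded)

rng = np.random.default_rng(0)
# Hypothetical line images: 2-10 random lines, each 32 px tall, varying width.
n_lines = int(rng.integers(2, 11))
lines = [rng.integers(0, 256, size=(32, int(rng.integers(100, 400))), dtype=np.uint8)
         for _ in range(n_lines)]
page = composite_page(lines)
print(page.shape)  # (32 * n_lines, widest line width)
```

One thing worth checking with data like this: grayscale pages should be expanded to 3 channels before going through a CLIP-style encoder, which expects RGB input.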
-
First of all, thanks for your great work!
We're now trying to replace the vision encoder in LLaVA, i.e., clip-l-336, with RADIO. Under the default LLaVA 1.5 settings, we pretrain a multimodal projection MLP a…
-
Hi,
For reproducibility, it would be great if you could provide the source code to reproduce the results on ScienceQA.
Thanks.
-
Hey unsloth team, beautiful work being done here.
I am the author of [MachinaScript for Robots](https://github.com/babycommando/machinascript-for-robots) - a framework for building LLM-powered robo…
-
## Innovation Happens Disproportionately at the Periphery of Scientific Networks
* paper: Innovations are disproportionately likely in the periphery of a scientific network [[paper link](https://link.springer.com/article/10.1007%2Fs12064-021-00359-1)]
* Chinese-language summary:…
-
Hi! I'm doing some RAG research and just found your post in this cookbook tutorial. It was awesome. I'd like to translate it into Chinese; I'm not sure whether that is okay.