haotian-liu / LLaVA

[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
https://llava.hliu.cc
Apache License 2.0
20.58k stars 2.27k forks

LLaVA-1.6 training dataset and training code. #1396

Open shengyuwoo opened 7 months ago

shengyuwoo commented 7 months ago

Describe the issue

When will the llava-1.6 training dataset and training code be open-sourced? I'm glad to see that llava-1.6's performance has improved so significantly; I believe this is due to extensive work on handling higher image resolutions and on training dataset construction. I'd like to learn more about how the training dataset was built and how the model was trained, especially now that llama3's multimodal capabilities are also on the horizon.

bhuvanl commented 7 months ago

+1

When will the training dataset and training code be open-sourced for llava-1.6?

50Bytes-dev commented 6 months ago

+1