haotian-liu / LLaVA

[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
https://llava.hliu.cc
Apache License 2.0

Could you please support Llama 3 in LLaVA? #1427

Open · awzhgw opened 5 months ago

awzhgw commented 5 months ago

feature

Could you please support Llama 3 in LLaVA?

mmaaz60 commented 4 months ago

Hi @awzhgw & @everyone,

I hope you are doing well. We have just released our project LLaVA++: Extending Visual Capabilities with LLaMA-3 and Phi-3, which features LLaMA-3- and Phi-3-Mini-based LLaVA models. Please have a look at the LLaVA++ repository.

I hope this helps. Please let me know if you have any questions. Thanks!
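For anyone landing here, a minimal sketch of what loading one of these checkpoints could look like, assuming the LLaVA++ fork of this code base (the stock loader may not recognize LLaMA-3 checkpoints); the checkpoint id below is hypothetical, so check the LLaVA++ repository for the model names it actually publishes:

```python
# Sketch only: assumes the LLaVA++ fork; the checkpoint id is hypothetical.
from llava.model.builder import load_pretrained_model
from llava.mm_utils import get_model_name_from_path

model_path = "MBZUAI/LLaVA-Meta-Llama-3-8B-Instruct"  # hypothetical id

# Returns the tokenizer, the multimodal model, the CLIP image processor,
# and the maximum context length supported by the checkpoint.
tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path=model_path,
    model_base=None,
    model_name=get_model_name_from_path(model_path),
)
print(f"Loaded {type(model).__name__}, context length {context_len}")
```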

hhaAndroid commented 4 months ago

Alternatively, XTuner already ships LLaVA configs for both Phi-3-Mini and LLaMA-3:

https://github.com/InternLM/xtuner/tree/main/xtuner/configs/llava/phi3_mini_4k_instruct_clip_vit_large_p14_336
https://github.com/InternLM/xtuner/tree/main/xtuner/configs/llava/llama3_8b_instruct_clip_vit_large_p14_336
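For reference, a rough sketch of driving those configs through the XTuner CLI; the config name in the last command is illustrative (the `list-cfg` step prints the exact names), so treat this as an assumption rather than a verified recipe:

```shell
# Sketch only: the config name below is illustrative; copy the exact
# name printed by `xtuner list-cfg`.
pip install -U 'xtuner[deepspeed]'

# List the LLaVA-related configs that ship with XTuner
xtuner list-cfg -p llava

# Fine-tune with one of the LLaMA-3 configs
xtuner train llava_llama3_8b_instruct_clip_vit_large_p14_336_e1_gpu8_finetune
```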