-
Thank you for your excellent work.
I believe LLaVA-1.6 currently supports 7B/13B models, but
do you have any plans to expand this to larger models (such as llava-hf/llava-v1.6-34b-hf, llava-hf/llama…
-
Thanks for your great work! I'm wondering if you can share the loss curve for training llava-next-llama3? I've observed somewhat different behaviour compared to training llava-next-vicuna-7b. I'm wondering …
-
Great work! I notice that LLaVA-NeXT-Qwen2 (the image model) achieves a surprising 49.5 on Video-MME. In contrast, LLaVA-NeXT-Video (Llama3) only reaches a 30+ Video-MME score (according to…
-
When I run the bash script below, an error occurs:
> bash scripts/video/demo/video_demo.sh /data/checkpoints/llama3-llava-next-8b vicuna_v1 32 2 average after no_token True /mnt/data/user/tc_agi/qmli/LLaVA-NeXT-inferenc…
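One thing worth checking before digging deeper: `vicuna_v1` is the Vicuna conversation template, while `llama3-llava-next-8b` expects the Llama-3 chat format. A minimal sketch of the same call with a matching template (the template name `llava_llama_3` and the positional-argument order are assumptions taken from the invocation above, not verified against the script):

```bash
# Sketch: same demo invocation, but with a conversation template that
# matches the Llama-3 checkpoint. "llava_llama_3" is an assumption; check
# llava/conversation.py for the exact registered name.
bash scripts/video/demo/video_demo.sh \
  /data/checkpoints/llama3-llava-next-8b \
  llava_llama_3 \
  32 2 average after no_token True \
  /path/to/video.mp4
```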
-
- [x] MiniCPM-Llama3-V-2_5
- [x] Florence 2
- [x] Phi-3-vision
- [x] Bunny
- [x] Dolphin-vision-72b
- [x] Llava Next
- [ ] Idefics 3
- [ ] Llava Interleave
- [ ] Llava onevision
- [ ] internlm…
-
Thank you for your great work; I appreciate it!
I want to use the new version of LLaVA (specifically llama3-llava-next-8b, whose checkpoint you can download here: https://huggingface.co/lmms-lab/l…
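As a side note, fetching that checkpoint ahead of time with the Hugging Face CLI looks roughly like the sketch below (the local directory is an arbitrary placeholder):

```bash
# Download the lmms-lab checkpoint to a local directory of your choice.
pip install -U "huggingface_hub[cli]"
huggingface-cli download lmms-lab/llama3-llava-next-8b \
  --local-dir ./checkpoints/llama3-llava-next-8b
```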
-
In the project https://github.com/InternLM/xtuner/tree/main/xtuner/configs/llava/llama3_8b_instruct_clip_vit_large_p14_336, there is an example of how to convert the llava-llama3 model to HF format:
…
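For anyone following along, the conversion in that xtuner directory goes through the generic `xtuner convert` entry point; a sketch with all paths as placeholders:

```bash
# Sketch: convert an xtuner-trained .pth checkpoint to Hugging Face format.
# CONFIG_FILE, PTH_CHECKPOINT, and HF_SAVE_DIR are placeholders.
xtuner convert pth_to_hf ${CONFIG_FILE} ${PTH_CHECKPOINT} ${HF_SAVE_DIR}
```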
-
![image](https://github.com/user-attachments/assets/934b8b35-c514-4b62-ae83-3eb83e3b13ff)
-
It would be nice if it were possible to pull multiple models in one go in Ollama.
Today I tried to run
```
ollama pull llava-phi3 llava-llama3 llama3-gradient phi3 moondream codeqwen
```
…
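In the meantime, a plain shell loop covers the same use case:

```bash
# Workaround: pull each model sequentially with a shell loop.
for m in llava-phi3 llava-llama3 llama3-gradient phi3 moondream codeqwen; do
  ollama pull "$m"
done
```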
-
I have been fine-tuning the llava-llama3-8b-v1_1 model on my own dataset using the llava_llama3_8b_instruct_full_clip_vit_large_p14_336_lora_e1_gpu8_finetune_copy.py script. While the training phase p…
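For context, launching a config like that with xtuner typically looks like the sketch below (the config filename is copied from the issue; the DeepSpeed flag and the 8-GPU launch variable are assumptions based on common xtuner usage):

```bash
# Sketch: launch the LoRA fine-tune config with xtuner.
# NPROC_PER_NODE and --deepspeed are typical xtuner options, assumed here.
NPROC_PER_NODE=8 xtuner train \
  llava_llama3_8b_instruct_full_clip_vit_large_p14_336_lora_e1_gpu8_finetune_copy.py \
  --deepspeed deepspeed_zero2
```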