-
I am trying to use the llama3-llava-next-8b model, and I replaced --model-path with the local path of llama3-llava-next-8b that I downloaded.
When I run `python -m llava.serve.model_worker --host 0.0…`
-
Thank you for your excellent work.
I believe llava-1.6 currently supports 7b/13b models, but
do you have any plans to expand this to larger models (such as llava-hf/llava-v1.6-34b-hf, llava-hf/llama…
-
Thanks for your great work! Could you share the loss curve for training llava-next-llama3? I've observed some behaviour that differs from training llava-next-vicuna-7b. I'm wondering …
-
Great work! I notice that LLaVA-NeXT-Qwen2 (the image model) achieves a surprising 49.5 on Video-MME. In contrast, LLaVA-NeXT-Video (Llama3) only achieves a Video-MME score in the 30s (according to…
-
- [x] MiniCPM-Llama3-V-2_5
- [x] Florence 2
- [x] Phi-3-vision
- [x] Bunny
- [x] Dolphin-vision-72b
- [x] Llava Next
- [ ] Idefics 3
- [ ] Llava Interleave
- [ ] Llava onevision
- [ ] internlm…
-
When I run the bash script below, an error occurs:
> bash scripts/video/demo/video_demo.sh /data/checkpoints/llama3-llava-next-8b vicuna_v1 32 2 average after no_token True /mnt/data/user/tc_agi/qmli/LLaVA-NeXT-inferenc…
-
Prompt outputs failed validation
OllamaGenerateAdvance:
- Value not in list: model: 'impactframes/llama3_ifai_sd_prompt_mkr_q4km:latest' not in ['llama3:8b-instruct-q4_K_M', 'llama3', 'phi3:3.8b-min…
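For context, this kind of failure is a plain membership check: the node validates the `model` field against the list of locally available model tags. Below is a minimal illustrative sketch of such a check; the function name and the contents of `AVAILABLE` are hypothetical, not ComfyUI's actual code.

```python
# Illustrative sketch of a "Value not in list" validation like the error
# above; AVAILABLE stands in for the model tags Ollama reports locally.
AVAILABLE = ["llama3:8b-instruct-q4_K_M", "llama3"]

def validate_model(name, available=AVAILABLE):
    """Raise if `name` is not among the locally pulled model tags."""
    if name not in available:
        raise ValueError(
            f"Value not in list: model: {name!r} not in {available}"
        )
    return name
```

Pulling the missing model first (`ollama pull impactframes/llama3_ifai_sd_prompt_mkr_q4km`) should add it to that list and clear the error.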
-
In the project: https://github.com/InternLM/xtuner/tree/main/xtuner/configs/llava/llama3_8b_instruct_clip_vit_large_p14_336, there is an example of how to convert the llava-llama3 model to HF format:
```
…
```
-
Thank you for your great work; I appreciate it!
I want to use the new version of LLaVA (specifically llama3-llava-next-8b; you can download the checkpoint here: https://huggingface.co/lmms-lab/l…
-
It would be nice if it were possible to pull multiple models in one go with Ollama.
Today, I tried to run
```
ollama pull llava-phi3 llava-llama3 llama3-gradient phi3 moondream codeqwen
```…
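Until multi-model pull is supported, the same effect can be had by invoking `ollama pull` once per model. A small Python sketch (the `pull_all` helper and the injectable `runner` parameter are my own additions for illustration; the model names are the ones from the command above):

```python
# Hypothetical workaround: `ollama pull` accepts one model per invocation,
# so loop over the list and pull each model in turn.
import subprocess

MODELS = ["llava-phi3", "llava-llama3", "llama3-gradient",
          "phi3", "moondream", "codeqwen"]

def pull_all(models, runner=subprocess.run):
    """Run `ollama pull <model>` for each model; `runner` is injectable
    so the loop can be tested without Ollama installed."""
    for name in models:
        runner(["ollama", "pull", name], check=True)

if __name__ == "__main__":
    pull_all(MODELS)
```

A one-line shell loop (`for m in …; do ollama pull "$m"; done`) would do the same job; the Python version just makes it easy to add retries or parallelism later.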