-
Hi, I am using sglang to deploy llava-next-interleave-qwen-7b, but I found there is no preprocessor_config.json for the llava-next-interleave-qwen-7b model. Could this be added to Hugging Face? Or do we hav…
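In case it helps to illustrate what such a file looks like, below is a minimal preprocessor_config.json sketch. The field values here are assumptions based on a typical SigLIP-style image processor (the kind of vision tower this model family commonly uses), not the official configuration for llava-next-interleave-qwen-7b; the real values would need to come from the model authors.

```json
{
  "image_processor_type": "SiglipImageProcessor",
  "do_resize": true,
  "size": {"height": 384, "width": 384},
  "do_rescale": true,
  "rescale_factor": 0.00392156862745098,
  "do_normalize": true,
  "image_mean": [0.5, 0.5, 0.5],
  "image_std": [0.5, 0.5, 0.5]
}
```

With a file like this placed in the model repository (or alongside a local checkpoint), `AutoImageProcessor.from_pretrained` can load the preprocessing settings instead of failing with a missing-file error.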
-
Thank you for your great work; I appreciate it!
I want to use the new version of LLaVA (specifically, llama3-llava-next-8b, whose checkpoint you can download here: https://huggingface.co/lmms-lab/l…
-
LLaVA 1.6 Next: https://llava-vl.github.io/blog/2024-01-30-llava-next/
Benchmark results for the 13B version are also available.
-
-
Hi LLaVA-NeXT team,
Will there be official support for llava-hf versions of the new LLaVA-NeXT (2024-05 release) models soon?
-
### Question
Hi, great work! I want to download the llava-next stronger checkpoint, but the website (https://huggingface.co/collections/lmms-lab/llava-next-6623288e2d61edba3ddbf5ff) returns a 404 error.
-
Hi, thanks for your solid work! I want to know how to calculate the R-GAE maps, especially the query-to-patch ones. Could you please share some of the key code?
-
Hi, I have recently been trying to use the llava_onevision model, following the onevision tutorial, which seems pretty straightforward. I ran the program exactly as in the tutorial, using the 0.5b_si model. However, a …
-
Recently, many MLLM works covering both image and video understanding have achieved great results on video benchmarks, e.g. LLaVA-NeXT, InternLM, VILA, etc.
I think these works should also be added to the paper …
-
### System Info
- GPU: NVIDIA A100-SXM4-80GB
- NVIDIA-SMI 535.183.01, Driver Version: 535.183.01, CUDA Version: 12.2
### Who can help?
@byshiue @kaiyux
### Information
- [X] …