-
Getting `ValueError: Unknown vision tower: google/siglip-so400m-patch14-384` when running https://github.com/LLaVA-VL/LLaVA-NeXT/blob/5fbcf27e32935f4e09d6b8b9f8abed4a572240b0/docs/LLaVA_OneVision_Tutorial…
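This kind of error usually comes from a builder function that dispatches on the vision tower's name and raises when no branch matches. A minimal sketch of that pattern follows; the function name, return values, and recognized substrings are assumptions for illustration, not the repo's actual code:

```python
# Hypothetical sketch of a vision-tower builder that dispatches on the
# checkpoint name, mirroring the kind of check that raises this ValueError.
# The recognized substrings and class names are illustrative assumptions.
def build_vision_tower(vision_tower_name: str) -> str:
    name = vision_tower_name.lower()
    if "clip" in name and "siglip" not in name:
        return "CLIPVisionTower"    # placeholder for the real tower class
    if "siglip" in name:
        return "SigLipVisionTower"  # an older checkout may lack this branch
    raise ValueError(f"Unknown vision tower: {vision_tower_name}")

# On a checkout that predates SigLIP support, this call would raise:
print(build_vision_tower("google/siglip-so400m-patch14-384"))
```

If your checkout raises this error, the SigLIP branch is likely missing from the builder, so updating to a newer commit that registers the SigLIP tower (and reinstalling the package) is the usual fix.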
-
I'm trying to deploy and run the demo on a cluster with 4 A6000s, but the runtime seems to freeze without raising any exceptions... What could the possible problems be? Sorry for asking a naive question, and thanks for…
-
When I run the bash script, an error occurs:
> bash scripts/video/demo/video_demo.sh /data/checkpoints/llama3-llava-next-8b vicuna_v1 32 2 average after no_token True /mnt/data/user/tc_agi/qmli/LLaVA-NeXT-inferenc…
-
I re-downloaded this repo and tried `transformers` versions `4.40.0.dev`, `4.40.0`, and `4.41.2`; the result is still `['']`.
Some of the things I did:
All the weights I use are local weights. Below are my changes:
1. `…
-
Thanks for your great work! I'm wondering if you can share the loss curve for training llava-next-llama3? I've observed some different behaviour compared to training llava-next-vicuna-7b. I'm wondering …
-
# Common Issues
More questions will be added...
## Training Related
Q: Cannot fine-tune the existing LLaVA-OneVision checkpoints.
A: We edited our model's config so that it is able to be se…
-
`bash playground/demo/interleave_demo.py --model_path path/to/ckpt`
This script should be run with `python`, not `bash`.
Also, how can this command specify the input image sequence? It appears to be jus…
-
I encountered an issue when installing the LLaVA-NeXT dependencies: I couldn't find this package on PyPI at all, even though the package is commented out in the requirements (byted remote ikernel==0.4.…
-
For the llava-onevision model, the official video inference code does not modify the `image_aspect_ratio` parameter, resulting in the use of the default `anyres_max_9`. This causes the `image_features…
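For context, under the anyres scheme the number of vision tokens grows with the crop grid, which is why a too-large default like `anyres_max_9` inflates the image features. A rough sketch of the token-count arithmetic; the 27×27 per-crop grid (SigLIP-so400m-patch14-384) and the base-view-plus-crops layout are assumptions for illustration, ignoring any pooling or newline tokens:

```python
# Rough, assumption-based sketch of anyres vision-token counting:
# each 384x384 crop yields a 27x27 = 729-token feature grid, and the
# anyres layout adds one base (resized) view on top of the crop grid.
TOKENS_PER_CROP = 27 * 27  # 729

def anyres_tokens(grid_rows: int, grid_cols: int) -> int:
    """Base view + grid of crops; pooling/newline tokens are ignored."""
    return (1 + grid_rows * grid_cols) * TOKENS_PER_CROP

print(anyres_tokens(1, 1))  # 1458 - a single crop plus the base view
print(anyres_tokens(3, 3))  # 7290 - the anyres_max_9 worst case
```

This is why overriding `image_aspect_ratio` (or capping the grid) for video inputs matters: per-frame token counts multiply across frames and can quickly exceed the context budget.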
-
Hey all!
The video models are all supported in Transformers now and will be part of the v4.42 release. Feel free to check out the model checkpoints [here](https://huggingface.co/collections/llava-h…