-
Can we deploy LLaVA-NeXT on a MacBook (M2)? Please provide an example.
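A minimal sketch of what such a deployment could look like, assuming the `llava-hf` conversions on the Hugging Face Hub and PyTorch's `mps` backend on Apple Silicon (the model id, prompt template, and generation settings here are illustrative, not an official recipe):

```python
import torch


def pick_device() -> str:
    """Prefer Apple's Metal backend (mps) on M-series Macs, else fall back to CPU."""
    return "mps" if torch.backends.mps.is_available() else "cpu"


def build_prompt(question: str) -> str:
    # Mistral-style chat format used by the llava-hf v1.6 Mistral conversion
    # (assumption: Vicuna-based checkpoints use a different template).
    return f"[INST] <image>\n{question} [/INST]"


def run_demo(image_path: str, question: str) -> str:
    # Heavy part: downloads ~14 GB of fp16 weights on first run.
    from PIL import Image
    from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration

    device = pick_device()
    model_id = "llava-hf/llava-v1.6-mistral-7b-hf"
    processor = LlavaNextProcessor.from_pretrained(model_id)
    model = LlavaNextForConditionalGeneration.from_pretrained(
        model_id, torch_dtype=torch.float16, low_cpu_mem_usage=True
    ).to(device)

    inputs = processor(
        images=Image.open(image_path), text=build_prompt(question), return_tensors="pt"
    ).to(device)
    output = model.generate(**inputs, max_new_tokens=100)
    return processor.decode(output[0], skip_special_tokens=True)
```

Note that a 7B model in fp16 needs roughly 16 GB of unified memory, so smaller M2 configurations may need quantized alternatives (e.g. the MLX ports).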
-
Any chance we could see a variant of each produced with the LLaVA 1.6 architecture? Thanks
-
When will the Vision-Text Alignment code be released? Looking forward to running SFT on my own data.
-
Hi, dear authors:
LLaVA-NeXT seems like really insightful exploratory work. Please kindly release the training and inference code as soon as possible, thank you very much.
-
Hello LLaVA-NeXT team!
I want to clarify some points about the AnyRes technique and how the image features are unpadded in the model's forward pass.
As this [issue](https://github.com/huggingface/transfor…
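For reference, the unpadding step the question refers to can be sketched as follows. This is a numpy re-write for illustration, mirroring the `unpad_image` helper in the LLaVA / transformers code (the real one operates on torch tensors); shapes and naming here are assumptions:

```python
import numpy as np


def unpad_image(features: np.ndarray, original_size: tuple[int, int]) -> np.ndarray:
    """Strip the rows/columns of padding added when the image was letterboxed.

    `features` has shape (C, H, W); `original_size` is (orig_height, orig_width).
    AnyRes resizes the image to fit a fixed grid while preserving aspect ratio,
    so one axis carries symmetric padding that must be cropped back out.
    """
    orig_h, orig_w = original_size
    cur_h, cur_w = features.shape[1:]
    if orig_w / orig_h > cur_w / cur_h:
        # Image is wider than the grid: padding was added top and bottom.
        scale = cur_w / orig_w
        new_h = int(orig_h * scale)
        pad = (cur_h - new_h) // 2
        return features[:, pad : cur_h - pad, :]
    else:
        # Image is taller than the grid: padding was added left and right.
        scale = cur_h / orig_h
        new_w = int(orig_w * scale)
        pad = (cur_w - new_w) // 2
        return features[:, :, pad : cur_w - pad]
```

For example, a 24x24 feature grid from a 100x200 (HxW) image keeps all 24 columns but is cropped to 12 rows, since the wide image only filled the middle band of the square grid.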
-
Building on the amazing work by @mzbac and @nkasmanoff in https://github.com/ml-explore/mlx-examples/pull/461, I'd really love an example of how LLaVA 1.6 (aka llava next) can be fine-tuned with a LoR…
-
Hi, I got a training curve like this. Is it normal? Would you mind sharing your trainer_state.json? Thanks!
-
I ran scripts/video/eval/activitynet_eval.sh, but there is no llavavid/eval directory in the project.
-
What should I specify as the `model_type` in the JSON file?
from transformers import AutoModel
model = AutoModel.from_pretrained("zxhezexin/openlrm-obj-base-1.1")
ValueError: Unrecogniz…
-
I wonder what the "32K" signifies in the "lmms-lab/LLaVA-NeXT-Video-7B-32K" checkpoint name.