ShuyUSTC opened 2 months ago
This bug is caused by an inconsistency between the CKPT version and the code version. We have fixed this bug in the CKPT. Please re-download the CKPT.
Another question:
When loading pipeline using:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("visual-question-answering", model="DAMO-NLP-SG/VideoLLaMA2-7B")
Transformers returns the following traceback:
ValueError: The checkpoint you are trying to load has model type `videollama2_mistral` but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.
Our model is not integrated into Transformers, so pipeline-style inference is not supported at the moment.
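For context, the error arises because `AutoConfig` looks up the `model_type` field from `config.json` in a registry of known architectures, and `videollama2_mistral` is not in a stock install. The toy sketch below (a simplified stand-in, not Transformers' actual code; the registry contents are illustrative) shows the shape of that check:

```python
# Toy sketch of how AutoConfig-style resolution rejects an unknown
# `model_type` read from config.json (illustrative subset of architectures).
KNOWN_MODEL_TYPES = {"mistral", "mixtral", "llama"}

def resolve_model_type(model_type: str) -> str:
    if model_type not in KNOWN_MODEL_TYPES:
        raise ValueError(
            f"The checkpoint you are trying to load has model type `{model_type}` "
            "but Transformers does not recognize this architecture."
        )
    return model_type

resolve_model_type("mistral")  # a stock architecture: resolves fine
try:
    resolve_model_type("videollama2_mistral")  # custom type: raises ValueError
except ValueError as e:
    print(e)
```

This is why the fix is to use the repo's own loading code rather than `pipeline(...)`: the custom architecture is never registered with Transformers.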
Hi team,
I'm trying to evaluate VideoLLaMA2 on MVBench. When I run inference_video_mcqa_mvbench.py, the following traceback occurs:
I find that the `processor` in https://github.com/DAMO-NLP-SG/VideoLLaMA2/blob/42bf9fe09656f0a155d96db77178fb74ccc9828d/videollama2/model/__init__.py#L193-L208 is initialized as None. With `model_type=mistral` in the `config.json` of VideoLLaMA2-7B and VideoLLaMA2-7B-16F, the `processor` stays None, which may cause the traceback above. Could you please help me address this problem? Thanks!
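To illustrate the failure mode described above, here is a simplified sketch (names and branching are assumed for illustration, not the repo's exact code) of how a loader that assigns the vision processor only for VideoLLaMA2-specific model types leaves `processor` as None when `config.json` reports plain `mistral`:

```python
def build_processor(model_type: str):
    # Simplified stand-in for the branching in videollama2/model/__init__.py:
    # only VideoLLaMA2-specific model types get a vision processor assigned.
    processor = None
    if model_type.startswith("videollama2"):
        processor = "vision-processor"  # placeholder for the real processor object
    return processor

print(build_processor("videollama2_mistral"))  # → vision-processor
print(build_processor("mistral"))              # → None; any later call on it fails
```

This matches the report: with `model_type=mistral`, no branch fires, `processor` is returned as None, and the inference script crashes the first time it tries to use it.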