huangb23 / VTimeLLM

[CVPR'2024 Highlight] Official PyTorch implementation of the paper "VTimeLLM: Empower LLM to Grasp Video Moments".
https://arxiv.org/pdf/2311.18445.pdf

KeyError: 'VTimeLLMConfig' #40

zipMunk closed this issue 1 month ago

zipMunk commented 1 month ago

I encountered an error when trying to run the offline demo with this command: python -m vtimellm.inference --model_base <path to the Vicuna v1.5 weights>

I used the weights from vtimellm-vicuna-v1-5-7b-stage3.

This is the output:

Loading VTimeLLM from base model...
Traceback (most recent call last):
  File "/amax/data/maopengzhe/envs/vtimellm/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/amax/data/maopengzhe/envs/vtimellm/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/amax/data/maopengzhe/projects/VTimeLLM/vtimellm/inference.py", line 78, in <module>
    tokenizer, model, context_len = load_pretrained_model(args, args.stage2, args.stage3)
  File "/amax/data/maopengzhe/projects/VTimeLLM/vtimellm/model/builder.py", line 34, in load_pretrained_model
    tokenizer = AutoTokenizer.from_pretrained(model_base, use_fast=False)
  File "/amax/data/maopengzhe/envs/vtimellm/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 718, in from_pretrained
    tokenizer_class_py, tokenizer_class_fast = TOKENIZER_MAPPING[type(config)]
  File "/amax/data/maopengzhe/envs/vtimellm/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 663, in __getitem__
    model_type = self._reverse_config_mapping[key.__name__]
KeyError: 'VTimeLLMConfig'
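As the last two frames show, AutoTokenizer resolves the tokenizer class from the config class it finds in the directory passed as --model_base, and a class named VTimeLLMConfig has no entry in transformers' TOKENIZER_MAPPING. A quick way to check which model type that directory actually carries (a minimal sketch; the path is a placeholder, not my real one):

import json
from pathlib import Path

# Hypothetical path; substitute the directory passed as --model_base.
model_base = Path("/path/to/model_base")

# AutoTokenizer picks the tokenizer class from the config's model_type.
# Plain Vicuna v1.5 weights report "llama"; a checkpoint that carries a
# custom config class transformers cannot map produces the KeyError above.
config = json.loads((model_base / "config.json").read_text())
print(config.get("model_type"))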

It seems there is some problem with the base model. How can I solve it?
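To narrow things down, the failing call can also be reproduced in isolation; a minimal sketch assuming only that transformers is installed, again with a placeholder path:

from transformers import AutoTokenizer

# This mirrors the call in vtimellm/model/builder.py, line 34. With stock
# Vicuna v1.5 weights it should return a LlamaTokenizer; pointing it at a
# directory holding a non-standard config reproduces the KeyError.
tokenizer = AutoTokenizer.from_pretrained(
    "/path/to/vicuna-7b-v1.5",  # hypothetical path to the base weights
    use_fast=False,
)
print(type(tokenizer).__name__)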