I encountered an error when trying to run the offline demo with this command:
python -m vtimellm.inference --model_base <path to the Vicuna v1.5 weights>
I chose the weights from vtimellm-vicuna-v1-5-7b-stage3.
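Concretely, the command I ran was essentially the following (the checkpoint path is shortened here; it points at the stage-3 folder I downloaded):

python -m vtimellm.inference --model_base /path/to/vtimellm-vicuna-v1-5-7b-stage3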
Here is the full traceback:
Loading VTimeLLM from base model...
Traceback (most recent call last):
File "/amax/data/maopengzhe/envs/vtimellm/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/amax/data/maopengzhe/envs/vtimellm/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/amax/data/maopengzhe/projects/VTimeLLM/vtimellm/inference.py", line 78, in
tokenizer, model, context_len = load_pretrained_model(args, args.stage2, args.stage3)
File "/amax/data/maopengzhe/projects/VTimeLLM/vtimellm/model/builder.py", line 34, in load_pretrained_model
tokenizer = AutoTokenizer.from_pretrained(model_base, use_fast=False)
File "/amax/data/maopengzhe/envs/vtimellm/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 718, in from_pretrained
tokenizer_class_py, tokenizer_class_fast = TOKENIZER_MAPPING[type(config)]
File "/amax/data/maopengzhe/envs/vtimellm/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 663, in getitem
model_type = self._reverse_config_mapping[key.name]
KeyError: 'VTimeLLMConfig'
It seems the error is related to the base model. How can I solve this problem?
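Am I misusing --model_base? Judging from args.stage2 and args.stage3 in the traceback, I am guessing the intended invocation looks something like this, with --model_base pointing at the original Vicuna v1.5 weights and the VTimeLLM checkpoints passed separately (the flag names and paths below are my assumption, not verified):

python -m vtimellm.inference \
    --model_base /path/to/vicuna-7b-v1.5 \
    --stage2 /path/to/vtimellm-vicuna-v1-5-7b-stage2 \
    --stage3 /path/to/vtimellm-vicuna-v1-5-7b-stage3

Is that the expected usage, or is something else wrong with my setup?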