haotian-liu / LLaVA

[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
https://llava.hliu.cc
Apache License 2.0
18.87k stars 2.07k forks

[Usage] Error "KeyError: 'LlavaMistralConfig'" during inference #1068

Open a2382625920 opened 6 months ago

a2382625920 commented 6 months ago

Describe the issue

Issue:

I downloaded llava-v1.6-mistral-7b locally, and this "LlavaMistralConfig" KeyError appears when running inference with llava-v1.6-mistral-7b. I don't know the exact cause. I am following the normal web_demo startup order to launch this new model, but it doesn't work; with LLaVA-v1.5, inference runs correctly.

a2382625920 commented 6 months ago

Traceback (most recent call last):
  File "/opt/conda/envs/llava/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/opt/conda/envs/llava/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/root/siton-glusterfs-eaxtsxdfs/hzt/projects/LLaVA-main_main/llava/serve/cli.py", line 130, in <module>
    main(args)
  File "/root/siton-glusterfs-eaxtsxdfs/hzt/projects/LLaVA-main_main/llava/serve/cli.py", line 32, in main
    tokenizer, model, image_processor, context_len = load_pretrained_model(args.model_path, args.model_base, model_name, args.load_8bit, args.load_4bit, device=args.device)
  File "/root/siton-glusterfs-eaxtsxdfs/hzt/projects/LLaVA-main_main/llava/model/builder.py", line 108, in load_pretrained_model
    tokenizer = AutoTokenizer.from_pretrained(model_path)
  File "/opt/conda/envs/llava/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 803, in from_pretrained
    tokenizer_class_py, tokenizer_class_fast = TOKENIZER_MAPPING[type(config)]
  File "/opt/conda/envs/llava/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 737, in __getitem__
    model_type = self._reverse_config_mapping[key.__name__]
KeyError: 'LlavaMistralConfig'
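For context on the traceback: `TOKENIZER_MAPPING` in transformers resolves tokenizers by looking up the *class name* of the model's config object in a reverse mapping, so any custom config class (like LLaVA's `LlavaMistralConfig`) that was never registered with that mapping raises exactly this `KeyError`. Below is a minimal, simplified sketch of that lookup mechanism; the class and mapping names here are stand-ins for illustration, not the real transformers internals.

```python
# Simplified stand-ins for transformers' config classes (hypothetical names).
class PretrainedConfig:
    pass

class MistralConfig(PretrainedConfig):
    pass

class LlavaMistralConfig(MistralConfig):
    """Custom config defined by LLaVA; unknown to stock transformers."""

# transformers keeps model_type -> config-class-name pairs and builds a
# reverse mapping keyed by the class *name* (sketched here).
CONFIG_MAPPING_NAMES = {"mistral": "MistralConfig"}
_reverse_config_mapping = {v: k for k, v in CONFIG_MAPPING_NAMES.items()}

def lookup_model_type(config):
    # Mirrors the failing __getitem__: lookup by config class name, so a
    # subclass of a known config still misses unless registered itself.
    return _reverse_config_mapping[type(config).__name__]

print(lookup_model_type(MistralConfig()))        # "mistral" resolves fine
try:
    lookup_model_type(LlavaMistralConfig())
except KeyError as e:
    print(f"KeyError: {e}")                      # KeyError: 'LlavaMistralConfig'
```

One commonly reported workaround (unverified here): LLaVA's `builder.py` only takes the Mistral-specific load path, which registers this config, when the model directory name contains both "llava" and "mistral", so keeping the local checkpoint folder named `llava-v1.6-mistral-7b` rather than a custom name may avoid the error.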