haotian-liu / LLaVA

[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
https://llava.hliu.cc
Apache License 2.0

KeyError: 'llava_llama' #1261

Open rohithbojja opened 6 months ago

rohithbojja commented 6 months ago

```
(llavalora) PS F:\finetune_LLaVA> python llava\eval\run_llava.py --model-path C:\Users\rohit\Downloads\Compressed\llama-2-7b-chat-task-qlora_5\workspace\LLaVA\model-lora\llama-2-7b-chat-task-qlora --model-base C:\models\llava-v1.5-7b --image-file "C:\datasets\evqa-rad\images\34bc8f62-7dc1-4491-ab51-5b521f2acfe5.jpg" --query "describe this image"
bin C:\ProgramData\miniconda3\envs\llavalora\lib\site-packages\bitsandbytes\libbitsandbytes_cuda118.dll
Traceback (most recent call last):
  File "F:\finetune_LLaVA\llava\eval\run_llava.py", line 157, in <module>
    eval_model(args)
  File "F:\finetune_LLaVA\llava\eval\run_llava.py", line 56, in eval_model
    tokenizer, model, image_processor, context_len = load_pretrained_model(
  File "C:\ProgramData\miniconda3\envs\llavalora\lib\site-packages\llava\model\builder.py", line 50, in load_pretrained_model
    lora_cfg_pretrained = AutoConfig.from_pretrained(model_path)
  File "C:\ProgramData\miniconda3\envs\llavalora\lib\site-packages\transformers\models\auto\configuration_auto.py", line 998, in from_pretrained
    config_class = CONFIG_MAPPING[config_dict["model_type"]]
  File "C:\ProgramData\miniconda3\envs\llavalora\lib\site-packages\transformers\models\auto\configuration_auto.py", line 710, in __getitem__
    raise KeyError(key)
KeyError: 'llava_llama'
```
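For context on the failure: `AutoConfig.from_pretrained` reads the checkpoint's `config.json`, takes its `model_type`, and looks it up in transformers' `CONFIG_MAPPING`, which only knows built-in model types unless a custom one has been registered. A minimal diagnostic sketch (the checkpoint path is a placeholder for your own):

```python
import json
from pathlib import Path

# Placeholder: directory of the fine-tuned LoRA checkpoint.
model_path = Path(r"C:\checkpoints\llama-2-7b-chat-task-qlora")

# AutoConfig looks up config_dict["model_type"] in CONFIG_MAPPING;
# "llava_llama" is not a built-in transformers type, so the lookup
# raises KeyError unless LLaVA has registered it first.
config = json.loads((model_path / "config.json").read_text())
print(config["model_type"])  # expected: "llava_llama"
```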

awzhgw commented 5 months ago

I get the same error. How do I resolve it?

Celtic-sf commented 3 months ago

I solved it by installing the latest LLaVA and transformers.
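That matches how recent versions of this repo work: importing `llava.model` registers the custom config type with transformers (`AutoConfig.register("llava_llama", LlavaConfig)` in `llava/model/language_model/llava_llama.py`), so an up-to-date install lets `AutoConfig` resolve the checkpoint. A minimal sketch of a sanity check, assuming the current repo's `llava` package is installed (e.g. `pip install -e .`) and using a placeholder checkpoint path:

```python
from transformers import AutoConfig

# Importing llava.model runs the AutoConfig.register("llava_llama", ...)
# call as a side effect, so the custom model_type becomes resolvable.
import llava.model  # noqa: F401

# Placeholder: point this at your fine-tuned LoRA checkpoint directory.
cfg = AutoConfig.from_pretrained(r"C:\checkpoints\llama-2-7b-chat-task-qlora")
print(type(cfg).__name__, cfg.model_type)  # should print a LLaVA config class and "llava_llama"
```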

dhruv10xd commented 3 days ago

I am facing the same error when I try to load my custom fine-tuned 7B LoRA model. Were you able to resolve it?