EvolvingLMMs-Lab / lmms-eval

Accelerating the development of large multimodal models (LMMs) with lmms-eval
https://lmms-lab.github.io/

KeyError: 'llavavid' #197

Closed guoyanan1g closed 1 month ago

guoyanan1g commented 1 month ago
Error during evaluation: Attempted to load model 'llava_vid', but no model for this name found! Supported model names: claude, from_log, fuyu, gemini_api, gpt4v, instructblip, internvl, internvl2, llama_vid, llava, llava_hf, llava_sglang, longva, mantis, minicpm_v, phi3v, qwen_vl, qwen-vl-api, reka, tinyllava, xcomposer2_4khd, xcomposer2d5
Traceback (most recent call last):
  File "/home/guoyanan.gyn/gyn/lmms-eval/lmms_eval/api/registry.py", line 29, in get_model
    return MODEL_REGISTRY[model_name]
KeyError: 'llavavid'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/guoyanan.gyn/gyn/lmms-eval/lmms_eval/__main__.py", line 202, in cli_evaluate
    results, samples = cli_evaluate_single(args)
  File "/home/guoyanan.gyn/gyn/lmms-eval/lmms_eval/__main__.py", line 298, in cli_evaluate_single
    results = evaluator.simple_evaluate(
  File "/home/guoyanan.gyn/gyn/lmms-eval/lmms_eval/utils.py", line 434, in _wrapper
    return fn(*args, **kwargs)
  File "/home/guoyanan.gyn/gyn/lmms-eval/lmms_eval/evaluator.py", line 96, in simple_evaluate
    lm = lmms_eval.api.registry.get_model(model).create_from_arg_string(
  File "/home/guoyanan.gyn/gyn/lmms-eval/lmms_eval/api/registry.py", line 31, in get_model
    raise ValueError(f"Attempted to load model '{model_name}', but no model for this name found! Supported model names: {', '.join(MODEL_REGISTRY.keys())}")
ValueError: Attempted to load model 'llavavid', but no model for this name found! Supported model names: claude, from_log, fuyu, gemini_api, gpt4v, instructblip, internvl, internvl2, llama_vid, llava, llava_hf, llava_sglang, longva, mantis, minicpm_v, phi3v, qwen_vl, qwen-vl-api, reka, tinyllava, xcomposer2_4khd, xcomposer2d5

Environment: torch 2.1.2, transformers 4.39.2, accelerate 0.33.0

I have tried both llava_vid and llavavid; both failed.
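For context, the traceback shows that `get_model` in `lmms_eval/api/registry.py` simply looks the name up in `MODEL_REGISTRY` and re-raises a `KeyError` as a friendlier `ValueError`. A minimal, self-contained sketch of that pattern (placeholder entries; not the real lmms_eval registry contents):

```python
# Toy registry mirroring the lookup seen in the traceback above.
# In lmms-eval, a model name is only a valid --model value if it is
# a key in MODEL_REGISTRY; a name can be absent e.g. when its module
# failed to register (such as a missing optional dependency).
MODEL_REGISTRY = {
    "llava": object,      # placeholder classes for illustration
    "llama_vid": object,
}

def get_model(model_name):
    try:
        return MODEL_REGISTRY[model_name]
    except KeyError:
        raise ValueError(
            f"Attempted to load model '{model_name}', but no model for this "
            f"name found! Supported model names: "
            f"{', '.join(MODEL_REGISTRY.keys())}"
        )
```

The "Supported model names" list in the error message is therefore the authoritative list for your installed version: if neither llava_vid nor llavavid appears in it, no spelling of the name will work until the corresponding model module is present and registered.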

floatingbigcat commented 3 weeks ago

Hi, I encountered the same issue, could you share your solution?

ZhaoyangLi-nju commented 2 weeks ago

+1

floatingbigcat commented 1 week ago

Hi @ZhaoyangLi-nju, I noticed that there is a branch called "inference" under LLaVA-NeXT which includes llavavid. I am not sure whether this is the intended solution, but it works for me after I make the switch: https://github.com/LLaVA-VL/LLaVA-NeXT/tree/inference/llavavid

Hi @jzhang38, can I kindly ask for your confirmation on the right way to use llavavid?