EvolvingLMMs-Lab / lmms-eval

Accelerating the development of large multimodal models (LMMs) with lmms-eval
https://lmms-lab.github.io/

Cannot load llava_hf models since the new updates! #119


hasanar1f commented 1 week ago

I am getting this error:

ValueError: Attempted to load model 'llava_hf', but no model for this name found! Supported model names: llava, qwen_vl, fuyu, batch_gpt4, gpt4v, instructblip, minicpm_v, claude, qwen-vl-api, llava_sglang, idefics2, internvl, gemini_api, reka, from_log, phi3v

When I execute the following:

python3 -m accelerate.commands.launch \
    --num_processes=1 \
    -m lmms_eval \
    --model llava_hf \
    --model_args pretrained="llava-hf/llava-1.5-7b-hf" \
    --tasks mmmu \
    --batch_size 1 \
    --log_samples \
    --log_samples_suffix llava_v1.5_mmmu \
    --output_path ./logs/

It looks like llava_hf is the only model that is not registered in lmms-eval. Is there a possible fix?

Thanks
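
A quick way to list the model names the installed copy has actually registered (a minimal diagnostic sketch; it assumes the registry is exposed as an AVAILABLE_MODELS mapping in lmms_eval.models, an attribute name that may differ between versions):

python3 -c "from lmms_eval.models import AVAILABLE_MODELS; print(sorted(AVAILABLE_MODELS))"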

kcz358 commented 6 days ago

Hi, can you check what error occurs when you run from lmms_eval.models.llava_hf import LlavaHf? This usually relates to environment problems.

hasanar1f commented 6 days ago

>>> from lmms_eval.models.llava_hf import LlavaHf
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Error importing reka: No module named 'reka'
Error importing flash_attn in mplug_owl. Please install flash-attn first.
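
The "Error importing ..." lines above suggest that the model registry wraps each model import in a try/except, so a model whose optional dependencies fail to import is silently dropped from the supported list. A minimal sketch of that pattern (hypothetical names and structure, not the repo's actual code):

import importlib

# Map model names to the classes that implement them. If a model's
# import fails (e.g. a missing optional dependency), it never shows
# up among the "supported model names" in the ValueError above.
AVAILABLE_MODELS = {"llava_hf": "LlavaHf", "reka": "Reka"}
REGISTRY = {}

for module_name, class_name in AVAILABLE_MODELS.items():
    try:
        module = importlib.import_module(f"lmms_eval.models.{module_name}")
        REGISTRY[module_name] = getattr(module, class_name)
    except ImportError as exc:
        # A failed import only prints a warning; the model is skipped.
        print(f"Error importing {module_name}: {exc}")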

hasanar1f commented 6 days ago

Installing reka-api resolved the first error. However, the flash_attn error is still there, and I have already installed flash-attn!

kcz358 commented 6 days ago

Hi, can you pull the main branch again? I don't know why llava_hf was removed from the registry; I have added it back.

As for the flash_attn warning in mplug_owl, you don't need to take care of it. It does not affect flash-attn inference in the rest of the pipeline, nor mplug_owl itself. I think it is some kind of version problem specific to mplug_owl.
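
After pulling the latest main branch, a quick membership check can confirm that llava_hf is registered again (again assuming an AVAILABLE_MODELS mapping in lmms_eval.models; the attribute name may differ between versions):

python3 -c "from lmms_eval.models import AVAILABLE_MODELS; print('llava_hf' in AVAILABLE_MODELS)"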