hasanar1f opened 1 week ago
Hi, can you check what error occurs when you run `from lmms_eval.models.llava_hf import LlavaHf`? This usually points to an environment problem.
When I run `from lmms_eval.models.llava_hf import LlavaHf`, I get:

```
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Error importing reka: No module named 'reka'
Error importing flash_attn in mplug_owl. Please install flash-attn first.
```
Installing reka-api resolved the first error. However, the flash_attn error is still there, even though I have already installed flash-attn!
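For anyone hitting the same thing, a quick sanity check independent of lmms-eval is to ask Python which optional dependencies are actually importable in the active environment. This is a minimal sketch using only the standard library; the module names are just the two from the errors above:

```python
import importlib.util

def check_optional_deps(names):
    """Return {module_name: True/False} for whether each module is importable."""
    return {name: importlib.util.find_spec(name) is not None for name in names}

if __name__ == "__main__":
    # 'reka' and 'flash_attn' are the optional backends mentioned in the errors
    for name, ok in check_optional_deps(["reka", "flash_attn"]).items():
        print(f"{name}: {'found' if ok else 'missing'}")
```

If flash_attn shows up as missing here but `pip list` says it is installed, the install likely went into a different environment than the one running lmms-eval.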
Hi, can you pull the main branch again? I don't know why llava_hf was removed from the registry; I have added it back.
As for the flash_attn error in mplug_owl, you can ignore it. It affects neither flash-attn inference in the overall pipeline nor mplug_owl itself; I think it is some kind of version problem specific to mplug_owl.
I am getting this error:
```
ValueError: Attempted to load model 'llava_hf', but no model for this name found! Supported model names: llava, qwen_vl, fuyu, batch_gpt4, gpt4v, instructblip, minicpm_v, claude, qwen-vl-api, llava_sglang, idefics2, internvl, gemini_api, reka, from_log, phi3v
```
When I execute the following:
```shell
python3 -m accelerate.commands.launch \
    --num_processes=1 \
    -m lmms_eval \
    --model llava_hf \
    --model_args pretrained="llava-hf/llava-1.5-7b-hf" \
    --tasks mmmu \
    --batch_size 1 \
    --log_samples \
    --log_samples_suffix llava_v1.5_mmmu \
    --output_path ./logs/
```
It looks like only the llava_hf model is missing from the lmms-eval registry. Any possible fix?
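One way to narrow this down is to check whether the module itself imports cleanly, separately from the registry lookup: if the direct import succeeds but `--model llava_hf` still fails, the problem is only the registry entry, not the environment. A sketch, assuming only that lmms-eval is installed; `can_import` is a hypothetical helper, not part of lmms-eval:

```python
import importlib

def can_import(module_path, attr):
    """Try importing module_path; return (ok, error), where ok is True only
    if the import succeeds and the module exposes the named attribute."""
    try:
        mod = importlib.import_module(module_path)
        return hasattr(mod, attr), None
    except Exception as exc:  # ImportError, missing optional deps, etc.
        return False, exc

if __name__ == "__main__":
    # If this prints True, the remaining failure is the registry, not the import.
    ok, err = can_import("lmms_eval.models.llava_hf", "LlavaHf")
    print("llava_hf importable:", ok, "" if ok else f"({err})")
```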
Thanks