[Error importing reka, flash-attn when evaluating llava]
When I try to evaluate llava-v1.5-7b, I use this command:
python -m accelerate.commands.launch \
--num_processes=1 \
-m lmms_eval \
--model llava \
--model_args pretrained="xxx/checkpoints/llava-v1.5-7b" \
--tasks mme,mmbench_en \
--batch_size 1 \
--log_samples \
--log_samples_suffix llava_v1.5_mme_mmbenchen \
--output_path ./logs/
but I get the following errors, and then the program gets stuck:
Error importing reka: No module named 'reka.client'
Error importing flash_attn in mplug_owl. Please install flash-attn first
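The first message can be reproduced outside the launcher (a minimal check; it assumes lmms_eval fails on a plain import of reka.client when building its model registry):

python -c "import reka.client"
# ModuleNotFoundError: No module named 'reka.client'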
I have installed both modules:
(lmms-eval) ➜ llava pip show reka
Name: reka
Version: 0.1
Summary: REKA file parser
Home-page:
Author: David Balogh
Author-email: balogh.david@gmail.com
License: MIT
Location: /xxx/.conda/envs/lmms-eval/lib/python3.9/site-packages
Requires:
Required-by:
(lmms-eval) ➜ llava pip show flash-attn
Name: flash-attn
Version: 2.5.9.post1
Summary: Flash Attention: Fast and Memory-Efficient Exact Attention
Home-page: https://github.com/Dao-AILab/flash-attention
Author: Tri Dao
Author-email: trid@cs.stanford.edu
License:
Location: /xxx/.conda/envs/lmms-eval/lib/python3.9/site-packages
Requires: einops, torch
Required-by:
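Note that the installed reka 0.1 describes itself as a "REKA file parser", which may be a different project from the Reka AI client package (reka-api) that actually provides reka.client. Also, flash_attn can be installed yet still fail to import (for example, a CUDA/ABI mismatch raises ImportError at load time), which pip show would not reveal. As a sanity check (a minimal sketch, assuming the errors come from plain import reka.client / import flash_attn statements inside lmms_eval), the following confirms whether the two modules resolve in this environment:

# check_imports.py -- run inside the lmms-eval env.
# Assumption: lmms_eval fails on plain `import reka.client` / `import flash_attn`.
import importlib.util

for mod in ("reka.client", "flash_attn"):
    try:
        spec = importlib.util.find_spec(mod)
    except (ModuleNotFoundError, ImportError):
        spec = None  # the parent package itself could not be imported
    print(mod, "->", spec.origin if spec else "NOT FOUND")

If reka.client prints NOT FOUND while flash_attn resolves, the installed reka package is likely the wrong one for lmms_eval.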