TRI-ML / vlm-evaluation

VLM Evaluation: Benchmark for VLMs, spanning text generation tasks from VQA to Captioning

Infer llava model_dir if model_id is given. #1

Closed lukaemon closed 9 months ago

lukaemon commented 9 months ago

Was running:

    python scripts/evaluate.py --model_family llava-v15 --model_id llava-v1.5-7b --model_dir liuhaotian/llava-v1.5-7b --dataset.type text-vqa-slim --dataset.root_dir /home/ubuntu/datasets/vlm-evaluation

Can't omit model_dir even when model_id is provided. Small tweak to fix it.
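The tweak could look something like the sketch below. This is a hypothetical illustration, not the actual patch: the helper name `infer_model_dir` and the fallback convention are assumptions, based only on the fact that in the command above `model_dir` (`liuhaotian/llava-v1.5-7b`) mirrors `model_id` (`llava-v1.5-7b`) under the `liuhaotian` Hugging Face namespace.

```python
from typing import Optional


def infer_model_dir(model_id: str, model_dir: Optional[str] = None) -> str:
    """Fall back to a model_dir derived from model_id when none is given.

    Hypothetical helper: assumes official llava-v15 checkpoints live under
    the `liuhaotian/` Hugging Face namespace, named after the model_id.
    """
    if model_dir is None:
        model_dir = f"liuhaotian/{model_id}"
    return model_dir


# An explicit model_dir is always respected; only the omitted case is inferred.
print(infer_model_dir("llava-v1.5-7b"))
print(infer_model_dir("llava-v1.5-7b", "/local/checkpoints/llava"))
```

With a default like this, `--model_dir` becomes optional on the command line for the common case while still allowing local checkpoint paths to override it.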