VLMEvalKit (the Python package name is vlmeval) is an open-source evaluation toolkit for large vision-language models (LVLMs). It enables one-command evaluation of LVLMs on various benchmarks, without the heavy workload of preparing data under multiple repositories. In VLMEvalKit, we adopt generation-based evaluation for all LVLMs, and provide evaluation results obtained with both exact matching and LLM-based answer extraction.
The performance numbers on our official multi-modal leaderboards can be downloaded from here!
OpenVLM Leaderboard: Download All DETAILED Results.
Supported Image Understanding Datasets
Dataset | Dataset Names (for run.py) | Task | Dataset | Dataset Names (for run.py) | Task |
---|---|---|---|---|---|
MMBench Series: MMBench, MMBench-CN, CCBench | MMBench_DEV_[EN/CN] MMBench_TEST_[EN/CN] MMBench_DEV_[EN/CN]_V11 MMBench_TEST_[EN/CN]_V11 CCBench | Multi-choice Question (MCQ) | MMStar | MMStar | MCQ |
MME | MME | Yes or No (Y/N) | SEEDBench Series | SEEDBench_IMG SEEDBench2 SEEDBench2_Plus | MCQ |
MM-Vet | MMVet | VQA | MMMU | MMMU_[DEV_VAL/TEST] | MCQ |
MathVista | MathVista_MINI | VQA | ScienceQA_IMG | ScienceQA_[VAL/TEST] | MCQ |
COCO Caption | COCO_VAL | Caption | HallusionBench | HallusionBench | Y/N |
OCRVQA* | OCRVQA_[TESTCORE/TEST] | VQA | TextVQA* | TextVQA_VAL | VQA |
ChartQA* | ChartQA_TEST | VQA | AI2D | AI2D_TEST | MCQ |
LLaVABench | LLaVABench | VQA | DocVQA+ | DocVQA_[VAL/TEST] | VQA |
InfoVQA+ | InfoVQA_[VAL/TEST] | VQA | OCRBench | OCRBench | VQA |
RealWorldQA | RealWorldQA | MCQ | POPE | POPE | Y/N |
Core-MM- | CORE_MM | VQA | MMT-Bench | MMT-Bench_[VAL/VAL_MI/ALL/ALL_MI] | MCQ |
MLLMGuard- | MLLMGuard_DS | VQA | AesBench | AesBench_[VAL/TEST] | MCQ |
* We only provide a subset of the evaluation results, since some VLMs do not yield reasonable results under the zero-shot setting
+ The evaluation results are not available yet
- Only inference is supported in VLMEvalKit
If you set the judge API key, VLMEvalKit uses a judge LLM to extract the answer from the model output; otherwise it uses the exact matching mode (finding "Yes", "No", "A", "B", "C", ... in the output string). Exact matching can only be applied to Yes-or-No tasks and multi-choice tasks.
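For intuition, the exact-matching fallback can be sketched as a simple string search. This is an illustrative toy, not the actual extraction code inside vlmeval, which is more robust:

```python
# A toy sketch of the exact-matching fallback described above; the real
# extraction logic inside vlmeval is more robust than this.
import re

def exact_match(prediction, choices=('A', 'B', 'C', 'D')):
    """Return the matched answer ('Yes'/'No' or an option letter), or None."""
    # Yes-or-No tasks: look for a standalone Yes / No.
    for token in ('Yes', 'No'):
        if re.search(rf'\b{token}\b', prediction, flags=re.IGNORECASE):
            return token
    # Multi-choice tasks: look for a standalone option letter.
    for letter in choices:
        if re.search(rf'\b{letter}\b', prediction):
            return letter
    return None  # ambiguous output -> this is where a judge LLM helps

print(exact_match('The answer is B.'))  # -> 'B'
```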
Supported Video Understanding Datasets
Dataset | Dataset Names (for run.py) | Task | Dataset | Dataset Names (for run.py) | Task |
---|---|---|---|---|---|
MMBench-Video | MMBench-Video | VQA |
Supported API Models
GPT-4v (20231106, 20240409) 🎞️🚅 | GPT-4o 🎞️🚅 | Gemini-1.0-Pro 🎞️🚅 | Gemini-1.5-Pro 🎞️🚅 | Step-1V 🎞️🚅 |
---|---|---|---|---|
Reka-[Edge / Flash / Core]🚅 | Qwen-VL-[Plus / Max] 🎞️🚅 | Claude3-[Haiku / Sonnet / Opus] 🎞️🚅 | GLM-4v 🚅 | CongRong 🎞️🚅 |
Claude3.5-Sonnet 🎞️🚅 | | | | |
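Calling an API model goes through the same interface as local models, provided the corresponding key is set. A minimal sketch, where the `GPT4o` config key and the `OPENAI_API_KEY` environment variable are assumptions to be checked against the config:

```python
# A minimal sketch of querying an API model; the config key 'GPT4o' and
# the OPENAI_API_KEY environment variable are assumptions.
import os
from vlmeval.config import supported_VLM

os.environ['OPENAI_API_KEY'] = 'sk-...'  # placeholder, set your real key
model = supported_VLM['GPT4o']()
print(model.generate(['assets/apple.jpg', 'Describe this image.']))
```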
Supported PyTorch / HF Models
🎞️: Support multiple images as inputs.
🚅: Models can be used without any additional configuration/operation.
Transformers Version Recommendation:
Note that some VLMs may fail to run under certain transformers versions. We recommend the following settings for evaluating each VLM (a version-check sketch follows the list):
- `transformers==4.33.0` for: Qwen series, Monkey series, InternLM-XComposer Series, mPLUG-Owl2, OpenFlamingo v2, IDEFICS series, VisualGLM, MMAlaya, ShareCaptioner, MiniGPT-4 series, InstructBLIP series, PandaGPT, VXVERSE, GLM-4v-9B.
- `transformers==4.37.0` for: LLaVA series, ShareGPT4V series, TransCore-M, LLaVA (XTuner), CogVLM Series, EMU2 Series, Yi-VL Series, MiniCPM-[V1/V2], OmniLMM-12B, DeepSeek-VL series, InternVL series, Cambrian Series.
- `transformers==4.40.0` for: IDEFICS2, Bunny-Llama3, MiniCPM-Llama3-V2.5, LLaVA-Next series, 360VL-70B, Phi-3-Vision, WeMM.
- `transformers==latest` for: PaliGemma-3B.
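Since the recommended transformers version differs across model families, it can be worth verifying the installed version before launching an evaluation. A minimal sketch; the mapping abbreviates the list above, and the helper itself is not part of VLMEvalKit:

```python
# A small helper (not part of VLMEvalKit) that warns when the installed
# transformers version differs from the recommendation above.
import transformers
from packaging.version import Version

RECOMMENDED = {            # abbreviated from the list above
    'Qwen series': '4.33.0',
    'LLaVA series': '4.37.0',
    'IDEFICS2': '4.40.0',
}

def check_transformers(family):
    want = RECOMMENDED.get(family)
    have = transformers.__version__
    if want and Version(have) != Version(want):
        print(f'Warning: {family} is recommended with transformers=={want}, '
              f'found {have} instead.')

check_transformers('Qwen series')
```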
# Demo
```python
from vlmeval.config import supported_VLM

model = supported_VLM['idefics_9b_instruct']()

# Forward a single image
ret = model.generate(['assets/apple.jpg', 'What is in this image?'])
print(ret)  # The image features a red apple with a leaf on it.

# Forward multiple images
ret = model.generate(['assets/apple.jpg', 'assets/apple.jpg', 'How many apples are there in the provided images?'])
print(ret)  # There are two apples in the provided images.
```
See [QuickStart] for a quick-start guide. To develop custom benchmarks or VLMs, or to contribute other code to VLMEvalKit, please refer to the [Development Guide].
The codebase is designed to make it easy to evaluate LVLMs: to run a VLM on all supported benchmarks, one only needs to implement a single `generate_inner()` function; all other workloads (data downloading, data preprocessing, prediction inference, metric calculation) are handled by the codebase. The codebase is not designed to reproduce the exact accuracy numbers reported in the original papers of all third-party benchmarks, since VLMEvalKit adopts generation-based evaluation throughout, while some benchmarks officially use other evaluation protocols.
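As a sketch of what that single function looks like, a custom model subclasses the toolkit's model base class and implements `generate_inner()`. The base-class import path and message format below are assumptions to be confirmed against the Development Guide:

```python
# A minimal sketch of a custom VLM integration. Assumed: the base class
# is vlmeval.vlm.base.BaseModel and generate_inner receives an interleaved
# message list of {'type': 'image'|'text', 'value': ...} dicts.
from vlmeval.vlm.base import BaseModel

class MyVLM(BaseModel):
    def generate_inner(self, message, dataset=None):
        images = [x['value'] for x in message if x['type'] == 'image']
        prompt = '\n'.join(x['value'] for x in message if x['type'] == 'text')
        # Run your own model here and return its text response; a stub:
        return f'Received {len(images)} image(s); prompt: {prompt[:50]}'
```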
If you find this work helpful, please consider starring 🌟 this repo. Thanks for your support!
If you use VLMEvalKit in your research or wish to refer to published open-source evaluation results, please use the following BibTeX entry, along with the BibTeX entries corresponding to the specific VLMs / benchmarks you used.
```bib
@misc{2023opencompass,
    title={OpenCompass: A Universal Evaluation Platform for Foundation Models},
    author={OpenCompass Contributors},
    howpublished = {\url{https://github.com/open-compass/opencompass}},
    year={2023}
}
```