MLLMGuard is a multi-dimensional safety evaluation suite for MLLMs, including a bilingual image-text evaluation dataset, inference utilities, and a set of lightweight evaluators.
[2024-06-06] We release MLLMGuard, our safety evaluation suite.
git clone https://github.com/Carol-gutianle/MLLMGuard.git
cd MLLMGuard
conda create -n guard python=3.10
conda activate guard
pip install -r requirements.txt
# create new folders
mkdir data results logs
Please put the downloaded data under the data folder. The layout of the data folder is:
----data
|-privacy
|-bias
|-toxicity
|-hallucination
|-position-swapping
|-noise-injection
|-legality
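As a sanity check before running inference, the layout above can be verified with a short script. This is only a sketch: check_data_layout is a hypothetical helper, not part of the repository; the folder names are the seven dimensions from the tree.

```python
from pathlib import Path

# The seven evaluation dimensions listed in the tree above
DIMENSIONS = [
    "privacy", "bias", "toxicity", "hallucination",
    "position-swapping", "noise-injection", "legality",
]

def check_data_layout(root="data"):
    """Return the list of dimension folders missing under `root`."""
    root = Path(root)
    return [d for d in DIMENSIONS if not (root / d).is_dir()]

if __name__ == "__main__":
    missing = check_data_layout()
    if missing:
        print("Missing data folders:", ", ".join(missing))
    else:
        print("Data folder layout looks complete.")
```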
Our data is hosted on the Hugging Face website. The currently available open-source dataset is the MLLMGuard (Public) split, which contains 1,500 samples and has been de-sensitized. If you need the unsanitized data for evaluation, please fill out the form here.
The leaderboard is a ranking of the evaluated models, with scores computed by GuardRank on the unsanitized subset of the MLLMGuard (Public) split.
You can view the latest leaderboard on the Hugging Face Space.
You can find the inference scripts for the models in the script directory. We provide inference scripts for both closed-source and open-source models.
We provide an evaluation script evaluate_api.sh for the closed-source models. You only need to supply model (the name of the model) and openai (the API key for OpenAI/Gemini), and the results will be stored in the results folder.
The following models have been implemented, along with their corresponding nicknames in the apis folder:
We also provide an evaluation script eval.sh for the open-source models. You only need to supply model (model_name_or_path) and category (the dimension to be evaluated), and the results will be stored in the results folder.
You can add your own model by inheriting the Mllm class from models/base.py and overriding the __init__ and evaluate functions. Remember to add an interface for your custom model in evaluate.py as well. You can refer to the example in cogvlm.py and the code snippet below:
elif 'cogvlm' in model_name:
from models.cogvlm import CogVLM
cogvlm = CogVLM(args.model, args.tokenizer)
evaluate_model(cogvlm, args, data)
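For illustration, a custom model plugin might look like the sketch below. The stand-in Mllm base, the MyModel class, and its method signatures are hypothetical placeholders chosen for this example; the real base class and its exact interface live in models/base.py.

```python
# Hypothetical sketch of a custom model plugin for MLLMGuard.
# The real Mllm base class lives in models/base.py; this stand-in
# only illustrates the pattern of overriding __init__ and evaluate.
class Mllm:  # stand-in for models.base.Mllm (assumed interface)
    def __init__(self, model_name_or_path, tokenizer=None):
        self.model_name_or_path = model_name_or_path
        self.tokenizer = tokenizer

    def evaluate(self, image, prompt):
        raise NotImplementedError

class MyModel(Mllm):
    def __init__(self, model_name_or_path, tokenizer=None):
        super().__init__(model_name_or_path, tokenizer)
        # Load your weights and processor here,
        # e.g. with transformers or your own loader.

    def evaluate(self, image, prompt):
        # Run inference on one image-text pair and return the text response.
        # Replace this placeholder with a real forward pass.
        return f"response to: {prompt}"
```

With a class like this in place, the matching elif branch in evaluate.py would follow the same shape as the cogvlm snippet above.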
You can use GuardRank for quick scoring. Please refer to GuardRank for the process.
If you think this evaluation suite is helpful, please cite the paper.
@misc{gu2024mllmguard,
title={MLLMGuard: A Multi-dimensional Safety Evaluation Suite for Multimodal Large Language Models},
author={Tianle Gu and Zeyang Zhou and Kexin Huang and Dandan Liang and Yixu Wang and Haiquan Zhao and Yuanqi Yao and Xingge Qiao and Keqing Wang and Yujiu Yang and Yan Teng and Yu Qiao and Yingchun Wang},
year={2024},
eprint={2406.07594},
archivePrefix={arXiv},
primaryClass={cs.CR}
}