MultiMedEval is a library to evaluate the performance of Vision-Language Models (VLMs) on medical domain tasks. The goal is to provide a set of benchmarks with a unified evaluation scheme to facilitate the development and comparison of medical VLMs. We include 24 tasks representing 10 different imaging modalities as well as some text-only tasks.
Representation of the modalities, tasks and datasets in MultiMedEval
To install the library, you can use pip:
pip install multimedeval
To run the benchmark on your model, you first need to create an instance of the MultiMedEval class.
from multimedeval import MultiMedEval, SetupParams, EvalParams
engine = MultiMedEval()
You then need to call the setup function of the engine. This will download the datasets if needed and prepare them for evaluation. You can specify where to store the data and which datasets you want to download.
setupParams = SetupParams(medqa_dir="data/")
tasksReady = engine.setup(setupParams=setupParams)
Here we initialize the SetupParams dataclass with only the path for the MedQA dataset. If you don't pass a directory for some of the datasets, they will be skipped during the evaluation. During the setup process, the script will need a PhysioNet username and password to download "VinDr-Mammo", "MIMIC-CXR" and "MIMIC-III". You also need to set up Kaggle on your machine before running the setup, as the "CBIS-DDSM" dataset is hosted on Kaggle. At the end of the setup process, you will see a summary of which tasks are ready and which failed to set up, and the function returns this summary as a dictionary.
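For example, a setup call that prepares several datasets and inspects the returned summary could look like the following sketch; the field names other than medqa_dir (vqa_rad_dir, physionet_username, physionet_password) are assumptions and should be checked against the SetupParams definition in your installed version:
# Sketch only: the extra SetupParams fields below are assumptions; check the
# dataclass definition for the exact field names in your version.
setupParams = SetupParams(
    medqa_dir="data/",
    vqa_rad_dir="data/",
    physionet_username="myusername",
    physionet_password="mypassword",
)
tasksReady = engine.setup(setupParams=setupParams)

# The returned summary is a dictionary; print it to see the status of each task.
for taskName, status in tasksReady.items():
    print(taskName, status)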
The user must implement one Callable: the batcher. It takes a batch of inputs and must return one answer per input. The batch is a list of inputs, and each input is a tuple of a conversation (a list of role/content messages) and a list of PIL images:
[
    (
        [
            {"role": "user", "content": "This is a question with an image <img>."},
            {"role": "assistant", "content": "This is the answer."},
            {"role": "user", "content": "This is a question with an image <img>."},
        ],
        [PIL.Image(), PIL.Image()],
    ),
    (
        [
            {"role": "user", "content": "This is a question without images."},
            {"role": "assistant", "content": "This is the answer."},
            {"role": "user", "content": "This is a question without images."},
        ],
        [],
    ),
]
Here is an example of a batcher without any logic:
def batcher(prompts) -> list[str]:
    return ["Answer" for _ in prompts]
A function is the simplest example of a Callable, but the batcher can also be implemented as a Callable class (i.e. a class implementing the __call__ method). Doing it this way allows you to initialize the model in the __init__ function of the class. We give an example for the Mistral model (a language-only model).
from transformers import AutoModelForCausalLM, AutoTokenizer

class batcherMistral:
    def __init__(self) -> None:
        self.model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
        self.tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
        self.tokenizer.pad_token = self.tokenizer.eos_token

    def __call__(self, prompts):
        # Each prompt is a (conversation, images) tuple; Mistral is text-only, so only the conversation is used.
        model_inputs = [self.tokenizer.apply_chat_template(messages[0], return_tensors="pt", tokenize=False) for messages in prompts]
        model_inputs = self.tokenizer(model_inputs, padding="max_length", truncation=True, max_length=1024, return_tensors="pt")
        generated_ids = self.model.generate(**model_inputs, max_new_tokens=200, do_sample=True, pad_token_id=self.tokenizer.pad_token_id)

        # Keep only the newly generated tokens (drop the prompt tokens).
        generated_ids = generated_ids[:, model_inputs["input_ids"].shape[1]:]
        answers = self.tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
        return answers
To run the benchmark, call the eval method of the MultiMedEval class with the list of tasks to benchmark on, the batcher to evaluate, and the evaluation parameters. If the list is empty, all the tasks will be benchmarked.
evalParams = EvalParams(batch_size=128)
results = engine.eval(["MedQA", "VQA-RAD"], batcher, evalParams=evalParams)
The SetupParams class takes a path for each dataset; for the datasets loaded through Hugging Face's load_dataset function, this path is passed as cache_dir. The EvalParams class holds the evaluation settings, such as the batch_size used above.
To add a new task to the list of already implemented ones, create a folder named MultiMedEvalAdditionalDatasets and a subfolder with the name of your dataset. Inside your dataset folder, create a JSON file that follows this template for a VQA dataset:
{
    "taskType": "VQA",
    "modality": "Radiology",
    "samples": [
        {
            "question": "Question 1",
            "answer": "Answer 1",
            "images": ["image1.png", "image2.png"]
        },
        { "question": "Question 2", "answer": "Answer 2", "images": ["image1.png"] }
    ]
}
And for a QA dataset:
{
    "taskType": "QA",
    "modality": "Pathology",
    "samples": [
        {
            "question": "Question 1",
            "answer": "Answer 1",
            "options": ["Option 1", "Option 2"],
            "images": ["image1.png", "image2.png"]
        },
        {
            "question": "Question 2",
            "answer": "Answer 2",
            "options": ["Option 1", "Option 2"],
            "images": ["image1.png"]
        }
    ]
}
Note that in both cases the images key is optional. If the taskType is VQA, the computed metrics are BLEU-1, accuracy (for closed and open questions), recall (overall and for open questions), and F1. For the QA taskType, the tool reports accuracy, computed by comparing the answer to every option using BLEU.
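As an illustration, here is a minimal sketch that writes such a dataset folder to disk. The folder name MyVQADataset, the file name dataset.json and the sample contents are placeholders, not names required by the library:
import json
from pathlib import Path

# Sketch: create the folder structure and JSON file for a new VQA task.
# "MyVQADataset" and "dataset.json" are placeholder names.
dataset_dir = Path("MultiMedEvalAdditionalDatasets") / "MyVQADataset"
dataset_dir.mkdir(parents=True, exist_ok=True)

dataset = {
    "taskType": "VQA",
    "modality": "Radiology",
    "samples": [
        {"question": "Question 1", "answer": "Answer 1", "images": ["image1.png"]}
    ],
}

with open(dataset_dir / "dataset.json", "w") as f:
    json.dump(dataset, f, indent=2)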
To cite MultiMedEval, you can use the following BibTeX entry:
@misc{royer2024multimedeval,
    title={MultiMedEval: A Benchmark and a Toolkit for Evaluating Medical Vision-Language Models},
    author={Corentin Royer and Bjoern Menze and Anjany Sekuboyina},
    year={2024},
    eprint={2402.09262},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}