AILab-CVC / SEED-Bench

(CVPR 2024) A benchmark for evaluating Multimodal LLMs using multiple-choice questions.

A lot of data has more questions than pictures in SEED-Bench-2 level L2. Is this reasonable? #15

[Open] nemonameless opened this issue 9 months ago

nemonameless commented 9 months ago

SEED-Bench_v2_level1_2_3.json example:

        {
            "answer": "B",
            "choice_a": "The man and woman in the image are both looking away from the camera.",
            "choice_b": "The woman's hair is black.",
            "choice_c": "The woman's dog is on the couch next to her in the image.",
            "choice_d": "There are two people in the image.",
            "data_id": [
                "task23/ICL_images/in_context_attribute_2/1.jpg",
                "task23/ICL_images/in_context_attribute_2/2.jpg",
                "task23/ICL_images/in_context_attribute_2/3.jpg"
            ],
            "data_source": "SEED-Bench v2",
            "data_type": "Interleaved Image",
            "level": "L2",
            "question": "<img>: The predominant color of the uniforms worn by the players is blue. <img>: The most notable color present in the woman's outfit is orange. <img>:",
            "question_id": "23_0",
            "question_type_id": 23,
            "subpart": "Interleaved Image & Text Comprehension",
            "version": "v2"
        },

There are 360 questions that end in this style, with a trailing <img>: and nothing after it. Did you upload the wrong data?
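For reference, here is a minimal script to reproduce that count (the top-level layout of the JSON is an assumption; adjust if the record list is stored under a different key):

    import json

    with open("SEED-Bench_v2_level1_2_3.json") as f:
        data = json.load(f)

    # Assumption: the records are either a top-level list or live under a
    # "questions" key.
    records = data["questions"] if isinstance(data, dict) else data

    # Count L2 records whose question text ends with a dangling "<img>:",
    # i.e. an image slot with no question text after it.
    dangling = [q for q in records
                if q.get("level") == "L2"
                and q["question"].rstrip().endswith("<img>:")]
    print(len(dangling))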

Bohao-Lee commented 9 months ago

Thank you very much for your interest in our work. I have made the necessary modifications. Currently, there are 509 occurrences of the <img> token, which marks the position of the corresponding image in the L2 problems. In-Context Captioning has 120 problems with 3 images each, for a total of 360 images, while Interleaved Image-Text Analysis consists of 49 problems with a total of 149 images.
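For anyone who wants to verify these numbers, a minimal sketch (the grouping field is an assumption; the record above carries a "subpart" field, but the two tasks may also be distinguished by question_type_id):

    import json
    from collections import Counter

    with open("SEED-Bench_v2_level1_2_3.json") as f:
        data = json.load(f)
    records = data["questions"] if isinstance(data, dict) else data

    # Tally <img> occurrences per subpart; the totals should match the
    # counts quoted above (360 + 149 = 509).
    counts = Counter()
    for q in records:
        counts[q.get("subpart", "other")] += q["question"].count("<img>")
    print(counts, sum(counts.values()))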

nemonameless commented 9 months ago

You just fixed [img] -> <img>, but that does not resolve the issue I raised.

 "question": "<img>: The predominant color of the uniforms worn by the players is blue. <img>: The most notable color present in the woman's outfit is orange. <img>:",

There is still no question after the last <img>. Please fix it. Thanks.

Bohao-Lee commented 9 months ago

In fact, this format is deliberate. As mentioned in our paper, "Part-2 evaluates MLLMs' comprehension of arbitrary interleaved image-text inputs, including In-Context Captioning. In this task, two examples of image-caption pairs along with an image are provided, and the model is expected to describe the specific aspect of the image." For more details, please refer to Section 3.2.2 of our paper on SEED-Bench-2.
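In other words, the question string is a template: each <img> marks where an image from data_id is spliced in, and the text after a marker is the caption for that image. The text after the last image is empty because the model itself is expected to produce that description. A minimal sketch of how an evaluator might assemble the interleaved input (the helper and its exact output format are illustrative, not part of the released code):

    import os
    from PIL import Image

    def build_interleaved_input(record, image_root):
        # Illustrative only; the concrete input format depends on the
        # MLLM under test.
        segments = record["question"].split("<img>")
        parts = []
        if segments[0]:                 # text before the first image
            parts.append(segments[0])
        for path, text in zip(record["data_id"], segments[1:]):
            parts.append(Image.open(os.path.join(image_root, path)))
            if text:                    # caption following this image
                parts.append(text)
        # e.g. [img1, ": ...blue.", img2, ": ...orange.", img3, ":"]
        return parts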

nemonameless commented 9 months ago

Could you please provide a prompt or question? We just want to append a question after the last <img> for our model testing, for example: "Please select the description below that best describes the last image." or "Which of the following options provides the same type of description for the last picture?"

geyuying commented 9 months ago

Hi, following the few-shot setting of Flamingo [1], we do not provide a specific prompt for evaluating in-context captioning. Since we adopt PPL (perplexity) as the evaluation metric, it may not be necessary to add a question for model testing.

[1] Alayrac et al., "Flamingo: a Visual Language Model for Few-Shot Learning", NeurIPS 2022.
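For concreteness, PPL scoring means no question needs to be appended: each candidate choice is placed after the interleaved prompt, the model's loss on the choice tokens is computed, and the lowest-loss choice is taken as the prediction. A text-only sketch of the idea (images omitted for brevity; "gpt2" is only a placeholder for the MLLM under evaluation):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

    def choice_loss(prompt, choice):
        # Average negative log-likelihood of the choice tokens only,
        # conditioned on the prompt (prompt tokens masked with -100).
        prompt_ids = tok(prompt, return_tensors="pt").input_ids
        full_ids = tok(prompt + " " + choice, return_tensors="pt").input_ids
        labels = full_ids.clone()
        labels[:, :prompt_ids.shape[1]] = -100
        with torch.no_grad():
            return model(full_ids, labels=labels).loss.item()

    def predict(prompt, choices):
        # choices: {"A": "...", "B": "...", ...}; lowest loss wins.
        return min(choices, key=lambda k: choice_loss(prompt, choices[k]))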