SAIS-FUXI / EvalAlign

Apache License 2.0

Meaning of the annotations #4

Open wzczc opened 1 week ago

wzczc commented 1 week ago

[screenshot of the annotation file]

Hi, I want to know what `"from": "gpt"` means in the annotations. Your paper states that the dataset is human-annotated, but the annotation file here says `"from": "gpt"`?

SAIS-FUXI commented 1 week ago


I'm sorry for the ambiguity here. These results were indeed manually annotated, but in order to adapt to LLaVA, we constructed the data to be consistent with the original LLaVA format. The `"gpt"` turns here actually contain the manual annotations. As the evaluation in our paper shows, existing large models are unable to evaluate the images generated by these models, so manual annotation is necessary.
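To make this concrete, here is a minimal sketch of a LLaVA-style conversation entry. The entry below is illustrative only; the exact keys and values in the EvalAlign annotation files may differ. The point is that in this format `"from": "human"` marks the question turn and `"from": "gpt"` marks the answer turn, and in EvalAlign the `"gpt"` turns hold the human annotators' responses:

```python
import json

# Illustrative LLaVA-style annotation entry (field names follow the standard
# LLaVA conversation format; the actual EvalAlign keys/values are assumptions).
entry = {
    "id": "example_0",
    "image": "example_0.png",
    "conversations": [
        {"from": "human", "value": "<image>\nDoes the image faithfully depict the prompt?"},
        # Despite the "gpt" label, this turn was written by a human annotator.
        {"from": "gpt", "value": "Yes, the image matches the prompt."},
    ],
}

# Extract the (human-written) answers stored under the "gpt" role.
annotations = [t["value"] for t in entry["conversations"] if t["from"] == "gpt"]
print(json.dumps(annotations))
```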

wzczc commented 1 week ago


Got it, thanks for your reply. I would also like to ask: is there a specific document that details which images are included in the 34% that contain humans in the faithfulness evaluation?

SAIS-FUXI commented 1 week ago


You can refer to our paper (https://arxiv.org/abs/2406.16562) and our open-source dataset (https://huggingface.co/datasets/Fudan-FUXI/EvalAlign-datasets); the images come from different text-to-image generation models.

wzczc commented 1 week ago


Yes, I've downloaded the dataset, but I want to select the 34% of images that belong to the "human" category. Is there a file indicating which images belong to "human", or do I need to select them one by one myself?