Open wzczc opened 1 week ago
Hi, I want to know what `"from": "gpt"` means in the annotations. Your paper states that the dataset is human-annotated, but the annotation file here says `"from": "gpt"`?
Sorry for the ambiguity here. The data was indeed manually annotated, but to stay compatible with Llava we kept the format of the original Llava data, so the `"gpt"` role here actually contains the human-written annotations. As the evaluation in our paper shows, existing large models are unable to reliably evaluate the images generated by these models, so manual annotation is necessary.
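To make the role convention concrete, here is a minimal sketch of how such an annotation file could be read, assuming it follows the standard LLaVA conversation format (`conversations` entries with `from`/`value` fields); the helper names and the example structure are illustrative, not part of the released code:

```python
import json

def load_annotations(path):
    """Load a LLaVA-style annotation file, assumed to be a JSON list of
    samples like:
    {"id": "...", "image": "...",
     "conversations": [{"from": "human", "value": "<question>"},
                       {"from": "gpt", "value": "<answer>"}]}"""
    with open(path) as f:
        return json.load(f)

def split_turns(sample):
    """Separate questions from answers. Note: "gpt" is only the answer
    slot kept for LLaVA compatibility; its contents are human-written."""
    prompts = [t["value"] for t in sample["conversations"] if t["from"] == "human"]
    answers = [t["value"] for t in sample["conversations"] if t["from"] == "gpt"]
    return prompts, answers
```

So a reader iterating over the file should treat every `"from": "gpt"` turn as a human annotation, not a model output.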
Got it, thanks for your reply. I would also like to ask: is there a specific file that lists which images make up the 34% that contain humans in the faithfulness evaluation?
You can refer to our paper (https://arxiv.org/abs/2406.16562) and the open-source dataset (https://huggingface.co/datasets/Fudan-FUXI/EvalAlign-datasets); the images come from different text-to-image generation models.
Yes, I've downloaded the dataset, but I want to select the 34% of images that belong to the "human" category. Is there a file indicating which images these are, or do I need to pick them out one by one myself?