Yufang-Liu / clip_hallucination

[EMNLP 2024] Investigating and Mitigating Object Hallucinations in Pretrained Vision-Language (CLIP) Models

Script for automatically evaluating CLIP on the OHD benchmark #2

Open nutsintheshell opened 2 weeks ago

nutsintheshell commented 2 weeks ago

Dear authors, I wonder whether a script exists for automatically evaluating CLIP models on your benchmark. I found the data in the OHD-Caps dataset, but I can't find the code to evaluate models on it. It seems that main_aro.py in evaluate_clip has a path problem when evaluating models on OHD.
So, is there a script for this? Thanks.

Yufang-Liu commented 2 weeks ago

The file paths may need to be modified. The script for evaluating OHD performance is main_aro.py; the '--dataset' argument selects the subset, where COCO_Object, Flickr_Object, and Nocaps_Object refer to the three subsets of OHD.
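For reference, a minimal invocation sketch (the working directory and any additional flags such as model or checkpoint arguments are assumptions; the dataset paths inside the script must first be edited to point at your local OHD-Caps data):

```bash
# Sketch only — assumes main_aro.py sits in evaluate_clip/ and that the
# data paths inside it have been updated for your local setup. Extra
# arguments (e.g. model selection) may be required depending on the repo.
cd evaluate_clip
python main_aro.py --dataset COCO_Object   # or Flickr_Object / Nocaps_Object
```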

Yufang-Liu commented 2 weeks ago

If there are still issues after modifying the path, please let me know where the problem is. I am happy to help.

nutsintheshell commented 2 weeks ago

> If there are still issues after modifying the path, please let me know where the problem is. I am happy to help.

Thanks for your answers. I've dealt with the problem. Your work is pretty good.