mcahny / object_localization_network

Learning Open-World Object Proposals without Learning to Classify
Apache License 2.0

How to make "AR@k evaluation does not count those proposals on the 'seen' classes into the budget (k)" ? #15

Closed · xishanhan closed 1 year ago

xishanhan commented 1 year ago

Hi, thank you for your excellent work. I used your model to evaluate on coco_val2017_unseen_classes under your framework, and it works well. However, when I convert the results into a format that can be evaluated by Detectron2 and test on coco_val2017_unseen_classes under the Detectron2 framework, the performance becomes poor. So I'd love to know how you achieve what you described: "AR@k evaluation does not count those proposals on the 'seen' classes into the budget (k)". Looking forward to your reply.
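
For concreteness, the behavior being asked about can be sketched as: drop proposals that match seen-class ground truth before applying the top-k budget, so they never consume slots in k. A rough illustration follows; the IoU threshold and function name are assumptions for illustration, not this repo's actual implementation:

```python
import numpy as np

def filter_seen_from_budget(proposals, seen_gt_boxes, iou_thr=0.5):
    """Drop proposals overlapping 'seen'-class GT so they do not use up
    the AR@k budget. Boxes are (N, 4) / (M, 4) arrays in xyxy format.
    The 0.5 IoU threshold is an assumption for illustration."""
    if len(seen_gt_boxes) == 0:
        return proposals
    # Pairwise intersection between every proposal and every seen-class GT box.
    x1 = np.maximum(proposals[:, None, 0], seen_gt_boxes[None, :, 0])
    y1 = np.maximum(proposals[:, None, 1], seen_gt_boxes[None, :, 1])
    x2 = np.minimum(proposals[:, None, 2], seen_gt_boxes[None, :, 2])
    y2 = np.minimum(proposals[:, None, 3], seen_gt_boxes[None, :, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_p = (proposals[:, 2] - proposals[:, 0]) * (proposals[:, 3] - proposals[:, 1])
    area_g = (seen_gt_boxes[:, 2] - seen_gt_boxes[:, 0]) * (seen_gt_boxes[:, 3] - seen_gt_boxes[:, 1])
    iou = inter / (area_p[:, None] + area_g[None, :] - inter)
    # Keep only proposals whose best match against seen-class GT is below threshold.
    return proposals[iou.max(axis=1) < iou_thr]
```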

xishanhan commented 1 year ago

Oh, I found mmdet/datasets/coco_split.py and it answered my question. So, if I already have separate annotation JSONs for the seen and unseen categories, I don't need to use CocoSplitDataset and can just modify mmdet/datasets/coco.py directly, right?
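
For reference, a minimal sketch of the test-time dataset config that drives the seen/unseen split. The field names below are assumed from this repo's configs and mmdet/datasets/coco_split.py as I understand them; please verify against your checkout:

```python
# Sketch of the cross-category evaluation setup: train on VOC ('seen')
# classes, keep only non-VOC ('unseen') classes as ground truth at eval
# time. Field names assumed from this repo's configs; verify locally.
dataset_type = 'CocoSplitDataset'
data = dict(
    test=dict(
        type=dataset_type,
        is_class_agnostic=True,   # collapse all categories into one 'object' class
        train_class='voc',        # 'seen' classes used for training
        eval_class='nonvoc',      # only 'unseen' classes count as GT in AR@k
        ann_file='data/coco/annotations/instances_val2017.json',
        img_prefix='data/coco/val2017/'))
```

With separate seen/unseen annotation files, pointing a plain CocoDataset at the unseen-only JSON should give the same effect, as the comment above suggests.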

xishanhan commented 1 year ago

I found that I had made a mistake in the order of the images corresponding to the bboxes and scores in my results; they should follow the order of the 'images' list in the COCO annotation file. After I fixed this, the performance under Detectron2 matches MMDetection.
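
For anyone hitting the same bug, a minimal sketch of that reordering; `results_by_filename` is a hypothetical dict produced by your own converter:

```python
import json

# Reorder per-image results to follow the 'images' list of the COCO
# annotation file, which is the sequence the evaluator iterates over.
# `results_by_filename` is a hypothetical dict: file_name -> detections.
with open('data/coco/annotations/instances_val2017.json') as f:
    coco_ann = json.load(f)

ordered_results = [results_by_filename[img['file_name']]
                   for img in coco_ann['images']]
```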

YH-2023 commented 5 months ago

"May I ask how you manage to convert the results into a format that can be evaluated by Detectron2? MMDetection does not have the U-Recall metric, as well as the previous mAP and current mAP metrics. How can I align them with the evaluation metrics on Detectron2?"