Hi @akshay-raj-dhamija
I noticed issue #6, which is about the Table 1 results in the paper.
The main differences are:
Diff1. This repo does not ignore difficult annotations in PASCAL VOC.
Diff2. This repo uses an 11-point evaluation metric.
I modified the evaluation code to ignore "difficult" annotations (Diff1). However, Faster-RCNN only got 75 mAP on WR1-OpenSet (VOC07 metric), which is 2 mAP lower than Table 1.
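For reference, this is a minimal sketch of the kind of change I mean (not this repo's actual code; the function name `voc_match` and its arguments are just illustrative). In VOC-style matching, difficult boxes are excluded from the positive count, and detections that match them count as neither true nor false positives:

```python
import numpy as np

def voc_match(dets, gt_boxes, gt_difficult, iou_thr=0.5):
    """Sketch of VOC-style matching that ignores 'difficult' ground truth.

    dets:         (N, 4) detected boxes, sorted by descending score.
    gt_boxes:     (M, 4) ground-truth boxes.
    gt_difficult: (M,)  'difficult' flags from the PASCAL VOC annotations.
    Returns per-detection tp/fp arrays plus the count of non-difficult positives.
    """
    gt_difficult = np.asarray(gt_difficult, dtype=bool)
    tp = np.zeros(len(dets))
    fp = np.zeros(len(dets))
    matched = np.zeros(len(gt_boxes), dtype=bool)
    for i, d in enumerate(dets):
        if len(gt_boxes) == 0:
            fp[i] = 1
            continue
        # IoU of this detection with every ground-truth box.
        x1 = np.maximum(d[0], gt_boxes[:, 0]); y1 = np.maximum(d[1], gt_boxes[:, 1])
        x2 = np.minimum(d[2], gt_boxes[:, 2]); y2 = np.minimum(d[3], gt_boxes[:, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        union = ((d[2] - d[0]) * (d[3] - d[1])
                 + (gt_boxes[:, 2] - gt_boxes[:, 0]) * (gt_boxes[:, 3] - gt_boxes[:, 1])
                 - inter)
        ious = inter / np.maximum(union, 1e-10)
        j = int(ious.argmax())
        if ious[j] >= iou_thr:
            if gt_difficult[j]:
                continue                       # matched a difficult box: neither TP nor FP
            if not matched[j]:
                tp[i] = 1; matched[j] = True   # first match on this GT box
            else:
                fp[i] = 1                      # duplicate detection of the same GT box
        else:
            fp[i] = 1
    npos = int((~gt_difficult).sum())          # difficult boxes do not count as positives
    return tp, fp, npos
```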
Diff2 says this repo uses the 11-point evaluation metric. Is that the VOC07 metric? Should we use the VOC12 metric for evaluation instead?
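To make the distinction concrete, here is the standard `voc_ap` helper as used in py-faster-rcnn-style evaluation code (I am assuming this repo computes AP the same way): `use_07_metric=True` is the 11-point VOC07 interpolation, while `use_07_metric=False` is the all-point area-under-curve computation used from VOC10 onward.

```python
import numpy as np

def voc_ap(rec, prec, use_07_metric=False):
    """Compute AP from recall/precision arrays.

    use_07_metric=True  -> VOC07: sample precision at recall = 0.0, 0.1, ..., 1.0
                           and average over the 11 points.
    use_07_metric=False -> VOC10-12: area under the monotonically decreasing
                           precision envelope over all recall points.
    """
    if use_07_metric:
        ap = 0.0
        for t in np.arange(0.0, 1.1, 0.1):
            p = np.max(prec[rec >= t]) if np.any(rec >= t) else 0.0
            ap += p / 11.0
        return ap
    # All-point interpolation: add sentinels, enforce a non-increasing
    # precision envelope, then integrate precision over recall.
    mrec = np.concatenate(([0.0], rec, [1.0]))
    mpre = np.concatenate(([0.0], prec, [0.0]))
    for i in range(mpre.size - 1, 0, -1):
        mpre[i - 1] = np.maximum(mpre[i - 1], mpre[i])
    idx = np.where(mrec[1:] != mrec[:-1])[0]
    return float(np.sum((mrec[idx + 1] - mrec[idx]) * mpre[idx + 1]))
```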
Indeed, after evaluating with the VOC12 metric, Faster-RCNN got 77 mAP on WR1-OpenSet, which matches Table 1.
On the closed set, Faster-RCNN got 81 mAP with the VOC07 metric and 85 mAP with the VOC12 metric.
So my question is: do the Table 1 results use the VOC07 metric for the closed set and the VOC12 metric for WR1-OpenSet?