mmoghadam11 opened 3 years ago
Hey, were you able to get the mAP without pushing to the server?
@mmoghadam11 Can you share more details so that I can help you?
I guess this phenomenon might be attributed to the different evaluation metrics of DOTA and COCO. In the COCO metric, the accuracy of each class is evaluated with mAP averaged over IoU thresholds from 0.50 to 0.95 in steps of 0.05 (iou_thr = 0.50:0.05:0.95), whereas in the DOTA metric, the accuracy of each class is evaluated with AP50 only (iou_thr = 0.5).
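To make the difference concrete, here is a minimal sketch (not the repo's actual evaluation code; the per-threshold AP values are made-up numbers for a single class) showing how the two summaries relate:

```python
import numpy as np

# Made-up AP values for one class at IoU thresholds 0.50, 0.55, ..., 0.95.
iou_thrs = np.linspace(0.50, 0.95, 10)
ap_per_thr = np.array([0.53, 0.50, 0.46, 0.41, 0.35, 0.28, 0.20, 0.12, 0.05, 0.01])
assert len(ap_per_thr) == len(iou_thrs)

ap50 = ap_per_thr[0]          # DOTA-style metric: AP at IoU 0.5 only
coco_map = ap_per_thr.mean()  # COCO-style metric: AP averaged over all 10 thresholds
print(f"AP50 = {ap50:.3f}, mAP@[.5:.95] = {coco_map:.3f}")
```

Because the COCO number also averages over strict thresholds (0.75, 0.9, ...), it is always lower than AP50 for the same detector, so the two numbers are not directly comparable.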
I agree, I have the same issue. The results reported in the paper are not reproducible. Please update the pretrained models, or provide the exact commands you used to get those numbers.
I uploaded the NMS results and got this:
mAP: 0.3745174540144275
AP of each class:
plane: 0.5300541025955471
baseball-diamond: 0.27181818181818185
bridge: 0.43130126717635453
ground-track-field: 0.14171122994652408
small-vehicle: 0.24575509050331162
large-vehicle: 0.25701370866991136
ship: 0.7015457849808598
tennis-court: 0.5453346126169345
basketball-court: 0.3152847152847153
storage-tank: 0.09090909090909091
soccer-ball-field: 0.18932806324110674
roundabout: 0.1655011655011655
harbor: 0.6844767188033766
swimming-pool: 0.6143607552420431
helicopter: 0.4333673229272884
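(For reference, the overall mAP above appears to be just the unweighted mean of the 15 per-class AP50 values; a quick check with the rounded numbers:)

```python
# Sanity check: DOTA mAP as the unweighted mean of the 15 per-class AP50 values.
aps = [0.5301, 0.2718, 0.4313, 0.1417, 0.2458, 0.2570, 0.7015, 0.5453,
       0.3153, 0.0909, 0.1893, 0.1655, 0.6845, 0.6144, 0.4334]
print(sum(aps) / len(aps))  # ~0.3745, matching the mAP printed above
```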
Why is this not equal to the mAP reported in your paper (around 60-70 in the PDF), and why is it so low? How can I get more detail (something like a precision-recall plot)? Where can I get more .pth files? I tried to train the other configs, but it takes too long and Colab kicks me out :( Could you please share your checkpoints?
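For the precision-recall plot, one option is to build the curve yourself. This is only a minimal sketch, assuming you can dump per-detection confidence scores, TP/FP flags at IoU 0.5, and the ground-truth count for a class from the evaluation step; the names `pr_curve`, `scores`, `is_tp`, `num_gt` and the toy data are all hypothetical:

```python
import numpy as np
import matplotlib.pyplot as plt

def pr_curve(scores, is_tp, num_gt):
    """Compute a VOC-style precision-recall curve for one class.

    scores : confidence of each detection
    is_tp  : 1 if the detection matched a ground-truth box (IoU >= 0.5), else 0
    num_gt : total number of ground-truth boxes for this class
    """
    order = np.argsort(-np.asarray(scores))       # rank detections by confidence
    tp = np.asarray(is_tp, dtype=float)[order]
    fp = 1.0 - tp
    tp_cum, fp_cum = np.cumsum(tp), np.cumsum(fp)
    recall = tp_cum / max(num_gt, 1)
    precision = tp_cum / np.maximum(tp_cum + fp_cum, 1e-9)
    return recall, precision

# Hypothetical toy data: 6 detections, 4 ground-truth boxes.
recall, precision = pr_curve(
    scores=[0.95, 0.9, 0.8, 0.7, 0.6, 0.3],
    is_tp=[1, 1, 0, 1, 0, 1],
    num_gt=4,
)
plt.plot(recall, precision)
plt.xlabel("recall")
plt.ylabel("precision")
plt.title("plane (example)")
plt.show()
```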