faportillo opened this issue 6 years ago
Hello, I was testing out the evaluation code using the same directory for both label_path and result_path. Since the detections are the same annotations as the ground truth, I expected 100.0 for all metrics. Instead, I get:

Car coco AP@0.50:0.05:0.95:
bbox AP:100.00, 100.00, 100.00
bev  AP:0.12, 0.16, 0.25
3d   AP:0.12, 0.16, 0.25
aos  AP:100.00, 100.00, 100.00
Any suggestions as to why this is the case?
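For reference, here is roughly the sanity check I am running. This is only a minimal sketch assuming the kitti-object-eval-python layout (kitti_common.get_label_annos plus eval.get_official_eval_result / get_coco_eval_result); the exact imports may differ in your checkout, and the path is a placeholder:

# Sanity check: evaluate the ground-truth annotations against themselves.
# Assumes the kitti-object-eval-python module layout; adjust imports if needed.
import kitti_common as kitti
from eval import get_official_eval_result, get_coco_eval_result

label_path = "/path/to/kitti/training/label_2"   # placeholder path
result_path = label_path                         # same annotations on purpose

gt_annos = kitti.get_label_annos(label_path)
dt_annos = kitti.get_label_annos(result_path)

print(get_official_eval_result(gt_annos, dt_annos, 0))  # 0 = Car
print(get_coco_eval_result(gt_annos, dt_annos, 0))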
@abbyxxn I got the same result as you. Have you figured out the reason?
@faportillo @abbyxxn @traveller59 I figured out the reason.
Line 176 of rotate_iou.py,

176 return abab >= abap and abap >= 0 and adad >= adap and adap >= 0

should be changed to

176 eps = 0.0001
177 return abab >= abap - eps and abap >= 0 - eps and adad >= adap - eps and adap >= 0 - eps

Without the eps tolerance these comparisons amount to exact floating-point equality tests, which fail when the ground-truth and detection boxes overlap exactly.
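To see why the tolerance is needed: when the gt and detection boxes are identical, every corner of one box lies exactly on an edge of the other, and the projections abap/adap match abab/adad only up to floating-point rounding. Below is a minimal plain-NumPy sketch of the same arithmetic as point_in_quadrilateral (not the actual CUDA code from rotate_iou.py), checking a rotated box's own corner against itself:

# Plain-NumPy sketch of the point-in-quadrilateral test used by rotate_iou.py.
# corners are ordered A, B, C, D; the test projects AP onto AB and AD.
import numpy as np

def point_in_quadrilateral(pt, corners, eps=0.0):
    a, b, d = corners[0], corners[1], corners[3]
    ab, ad, ap = b - a, d - a, pt - a
    abab, abap = ab @ ab, ab @ ap
    adad, adap = ad @ ad, ad @ ap
    return (abab >= abap - eps and abap >= -eps and
            adad >= adap - eps and adap >= -eps)

# A rotated unit square, offset from the origin, tested against its own corner B.
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
corners = (np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]]) @ R.T) + 5.0

# The strict test can print False because ad @ ab rounds to a tiny negative
# number instead of exactly 0; with eps the corner is accepted.
print(point_in_quadrilateral(corners[1], corners))
print(point_in_quadrilateral(corners[1], corners, eps=1e-4))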
I also get:

Car AP(Average Precision)@0.70, 0.70, 0.70:
bbox AP:100.00, 100.00, 100.00
bev  AP:5.91, 5.34, 6.99
3d   AP:5.91, 5.34, 6.99
aos  AP:100.00, 100.00, 100.00
Car AP(Average Precision)@0.70, 0.50, 0.50:
bbox AP:100.00, 100.00, 100.00
bev  AP:5.91, 5.34, 6.99
3d   AP:5.91, 5.34, 6.99
aos  AP:100.00, 100.00, 100.00

@leon-liangwu
Because the gt and detection yaw angles need to differ by at least a tiny amount (around 0.1% error). After generating gt_yaw_small_change (the gt labels with a small offset added to the yaw), I got:

Car AP(Average Precision)@0.70, 0.70, 0.70:
bbox AP:100.00, 100.00, 100.00
bev  AP:100.00, 100.00, 100.00
3d  AP:100.00, 100.00, 100.00
aos  AP:100.00, 100.00, 100.00
Car AP(Average Precision)@0.70, 0.50, 0.50:
bbox AP:100.00, 100.00, 100.00
bev  AP:100.00, 100.00, 100.00
3d  AP:100.00, 100.00, 100.00
aos  AP:100.00, 100.00, 100.00

@erickwan @leon-liangwu
What is the difference between COCO's AP (at IoU = 0.5) and KITTI's 2D AP when I set the IoU threshold to 0.5? Why do I always get very different values?
@York1996OutLook Can you explain in detail what exactly you did to achieve this result?
@anshulpaigwar For instance, if the gt ry = 3.01, you should add a small number to it so that ry = 3.01001, and do this to every label. If you don't understand, you can add my QQ: 603997262.
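In case it helps, a rough sketch of that workaround: copy the KITTI label files and add a tiny offset to rotation_y (the 15th field, index 14) of every object, so the gt and the "detections" are no longer bit-identical. The directory names below are only placeholders:

# Sketch of the yaw-perturbation workaround; paths are placeholders.
import os

def perturb_yaw(src_dir, dst_dir, eps=1e-5):
    os.makedirs(dst_dir, exist_ok=True)
    for name in os.listdir(src_dir):
        if not name.endswith(".txt"):
            continue
        out_lines = []
        with open(os.path.join(src_dir, name)) as f:
            for line in f:
                fields = line.split()
                if len(fields) >= 15:                       # skip empty lines
                    fields[14] = f"{float(fields[14]) + eps:.6f}"  # rotation_y
                out_lines.append(" ".join(fields))
        with open(os.path.join(dst_dir, name), "w") as f:
            f.write("\n".join(out_lines) + "\n")

perturb_yaw("label_2", "label_2_yaw_shifted")  # placeholder directory names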
Hello, I also tested the evaluation code using the same directory for both label_path and result_path, so I expected 100.0 for all metrics since I am using the same annotations. Instead, I get:

Car AP@0.70, 0.70, 0.70:
bbox AP:100.00, 100.00, 100.00
bev  AP:0.50, 0.47, 0.47
3d  AP:0.50, 0.47, 0.47
aos  AP:100.00, 100.00, 100.00
Car AP@0.70, 0.50, 0.50:
bbox AP:100.00, 100.00, 100.00
bev  AP:0.50, 0.47, 0.47
3d  AP:0.50, 0.47, 0.47
aos  AP:100.00, 100.00, 100.00
Any suggestions as to why this is the case?