Hi Bichen, thanks for sharing your work! I'm wondering whether the evaluation tool used in this repo is the latest KITTI eval tool, since there was an update on 2017/04/25. According to the official site:
All methods are ranked based on the moderately difficult results. Note that for the hard evaluation ~2% of the provided bounding boxes have not been recognized by humans, thereby upper bounding recall at 98%. Hence, the hard evaluation is only given for reference.
Note 1: On 25.04.2017, we have fixed a bug in the object detection evaluation script. As of now, the submitted detections are filtered based on the min. bounding box height for the respective category, which had previously been done only for the ground-truth boxes. This was leading to false positives for the category "Easy" when bounding boxes of height 25-39 px were submitted (and to false positives for all categories if bounding boxes smaller than 25 px were submitted).
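To illustrate the change described in that note, here is a minimal sketch (not the actual KITTI C++ evaluation code) of filtering submitted detections by the per-difficulty minimum box height, the same filter that was previously applied only to ground truth. The 40/25/25 px thresholds are KITTI's published minimums for Easy/Moderate/Hard; the dict-based detection format is an assumption for the example:

```python
# Minimum bounding-box heights (in pixels) per KITTI difficulty level.
MIN_HEIGHT = {"easy": 40, "moderate": 25, "hard": 25}

def filter_detections(detections, difficulty):
    """Drop submitted detections whose box height falls below the
    difficulty's minimum, mirroring the post-2017/04/25 behavior.
    Each detection is assumed to be a dict with 'y1'/'y2' pixel coords."""
    min_h = MIN_HEIGHT[difficulty]
    return [d for d in detections if (d["y2"] - d["y1"]) >= min_h]
```

Before the fix, a 30 px-tall detection would survive the "Easy" evaluation and count as a false positive; with this filtering it is simply discarded, as it would have been for ground truth.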