Closed Hub-Tian closed 3 years ago
Hi Hub, thanks for your appreciation! The possible mismatched distribution is only a guess; it mainly comes from the much lower APs on the test split compared with those on the val split, which is also noted in the arXiv version of PartA^2. One possible explanation is that the val and test data are sampled from different segments, so the mismatched distribution may come from differences between those segments. We use the same evaluation settings on the val and test splits, and the training settings on the train and trainval sets are the same, e.g., 60 training epochs. We follow the common splitting strategy for the KITTI dataset.
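For reference, the common KITTI split mentioned above is usually loaded from per-split index files. A minimal sketch (file names and layout are assumptions, mirroring the typical `ImageSets/` convention):

```python
from pathlib import Path

def read_split(split_file):
    """Return one frame index per line (e.g. '000123') from a KITTI split file."""
    return [line.strip() for line in Path(split_file).read_text().splitlines()
            if line.strip()]

# Typical convention (assumed paths): train.txt and val.txt hold the standard
# 3712/3769 split of KITTI's 7481 annotated frames, and trainval.txt is their
# union, used for the final training run before test-server submission.
```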
Hi Vegeta, thanks for sharing such great work. I found that there are two evaluation results in test.py, and they are quite different. I wonder what the difference is between the two functions: dataset.evaluation() and kitti_evaluate().
Thanks. The two functions report AP under two metrics, computed with 11 and 40 sampled recall points, respectively.
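To make the two metrics concrete, here is a minimal sketch of interpolated AP over evenly spaced recall thresholds (the function name and exact threshold grids are my assumptions; the repo's own evaluation code may differ in detail):

```python
import numpy as np

def interpolated_ap(recalls, precisions, num_points):
    """Interpolated AP over evenly spaced recall thresholds.

    num_points=11 samples r in {0.0, 0.1, ..., 1.0} (classic 11-point AP);
    num_points=40 samples r in {1/40, 2/40, ..., 1.0} (the R40 variant,
    which drops the r=0 point and samples more densely).
    """
    recalls = np.asarray(recalls, dtype=float)
    precisions = np.asarray(precisions, dtype=float)
    if num_points == 11:
        thresholds = np.linspace(0.0, 1.0, 11)
    else:
        thresholds = np.linspace(1.0 / num_points, 1.0, num_points)
    ap = 0.0
    for t in thresholds:
        mask = recalls >= t
        # Max precision among all operating points with recall >= t.
        p = precisions[mask].max() if mask.any() else 0.0
        ap += p / len(thresholds)
    return ap
```

Because the two grids sample the precision-recall curve differently, the same detections can yield noticeably different AP numbers, which is why the two functions print different results.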
Wonderful work! It's really exciting to see such a clear framework that achieves SOTA results. As you point out, mismatched distributions exist between the KITTI val and test splits. Could you please add more description of the evaluation details for the test set, such as how the dataset is split and how many epochs are used?