Closed Ww-Lee closed 3 years ago
For the curve itself, refer to Figure 6. If you want to reproduce Figure 8, use the evaluation code from https://github.com/zihan-z/vpdet_tmm17.
Why do you think
0.127 | 0.291 | 0.503 | 0.743 | 0.855 | 0.921
is different from the one in paper? This is a screenshot of the result in our paper:
Sorry, it is indeed the same as the numbers in Table 2. What I meant is that it differs from the Y-axis values in Figure 6. For example, AA with an upper bound of 2 degrees in Table 2 is 0.503, but the Y-axis value corresponding to X-axis 2 in Figure 6 is about 0.8.
This is because the AA metric is not the curve value itself, but the (normalized) area under the angle accuracy curve up to the threshold. It behaves like an average over all smaller thresholds rather than a single percentile read off the curve. Please refer to Section 4.1 of the paper for the definition.
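As a rough illustration of the distinction (a minimal sketch, not the repo's `eval.py`; the function name `angle_accuracy` and the sample errors are hypothetical), the AA-style number averages the accuracy curve up to the threshold, while the pointwise number only reads the curve at the threshold:

```python
import numpy as np

def angle_accuracy(errors, threshold, samples=1000):
    """Sketch of an AA-style metric: the normalized area under the
    angle-accuracy curve up to `threshold` degrees, rather than the
    single accuracy value at `threshold`."""
    errors = np.sort(np.asarray(errors, dtype=float))
    xs = np.linspace(0, threshold, samples)
    # accuracy(x) = fraction of errors <= x, a CDF-like curve
    ys = np.searchsorted(errors, xs, side="right") / len(errors)
    # normalized area under the curve (simple Riemann approximation)
    return ys.mean()

errs = [0.3, 0.8, 1.5, 4.0, 9.0]
print(angle_accuracy(errs, 2.0))         # area-based AA up to 2 degrees
print(np.mean(np.asarray(errs) <= 2.0))  # pointwise accuracy at 2 degrees
```

Because the curve is nondecreasing, the area-based value is always at most the pointwise value at the same threshold, which is exactly why Table 2 (0.503 at 2 degrees) sits below the Figure 6 curve (about 0.8 at 2 degrees).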
I will close this issue. Feel free to reopen if you have other findings.
Hello Zhou, I downloaded the pre-trained models from your Google Drive and directly used TMM17/checkpoint_latest.pth with the function AA() in neurvps/eval.py to evaluate the TMM17 (Natural Scene) test images. However, the AA curve I generated is different from the one in the paper. Specifically, I get 0.127 | 0.291 | 0.503 | 0.743 | 0.855 | 0.921 for the thresholds [0.5, 1, 2, 5, 10, 20]. So I guess there is a problem with the calculation of the cumulative histogram in the function AA(); I modified it according to my own understanding and produced pretty similar results:
(np.argwhere(sortederr <= threshold)[-1][0] + 1) / len(loader)