Closed: abhi1kumar closed this issue 2 years ago
My apologies! I had a bug in my code, which led to those erroneous results; they were not caused by waymo_eval.py.
I have fixed it, and the evaluation results are now as expected.
@abhi1kumar Hello, are you using a monocular model? The performance is surprisingly high, which seems a little strange.
Yes, that is why I was worried as well. Later I found that my code had a bug. I fixed it and the evaluation results were then as expected.
Hi PCT authors, I am using your waymo_eval.py to evaluate my Waymo model. Here is the output:
You will quickly notice that the AP for all Level 1 Vehicles (0.34) is the same as the AP for [0, 30) Level 1 Vehicles (0.34). This strange behavior also shows up for the Level 1 Vehicle APH and the other Level 1 classes (not shown here). Generally, the AP over all ranges for Level 1 Vehicles is lower than the AP for the [0, 30) range, as correctly reported in Table 7 of your paper.
I am unable to understand this behavior, so I wanted to ask whether you have seen something similar on your end.
PS: Level 2 metrics do NOT show this behavior; e.g., in the above output, the AP for all Level 2 objects (0.02) is less than the AP for [0, 30) Level 2 objects (0.04), as expected.
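For anyone who hits the same symptom, a small sanity check on the evaluation results can catch it early. The sketch below is hypothetical: the metric keys, bucket names, and values are assumptions made for illustration, not the actual output format or API of waymo_eval.py. It simply flags the case where the overall AP exactly equals one of the per-range buckets, which is usually a sign that one value is overwriting the other.

```python
# Hypothetical sanity check for Waymo-style per-range eval results.
# The keys (e.g. "Vehicle/L1/AP") and values below are assumptions for
# illustration; adapt them to however your eval script reports metrics.

metrics = {
    "Vehicle/L1/AP":          0.34,  # overall (all ranges)
    "Vehicle/L1/AP_[0,30)":   0.34,  # near-range bucket
    "Vehicle/L1/AP_[30,50)":  0.20,
    "Vehicle/L1/AP_[50,inf)": 0.10,
}

def check_overall_vs_buckets(metrics, prefix="Vehicle/L1/AP", tol=1e-6):
    """Warn when the overall AP exactly matches a per-range bucket.

    The overall AP normally sits between the near- and far-range values,
    so an exact match with one bucket suggests the overall metric was
    accidentally clobbered (the bug described in this issue).
    """
    overall = metrics[prefix]
    buckets = {k: v for k, v in metrics.items() if k.startswith(prefix + "_")}
    for name, value in buckets.items():
        if abs(overall - value) < tol:
            print(f"WARNING: {prefix} == {name} ({value:.2f}); "
                  f"check whether a per-range result overwrote the overall metric.")

check_overall_vs_buckets(metrics)
```

Note that the overall AP is not a simple average of the bucket APs (it is computed from the full PR curve), so the check only looks for exact coincidences rather than enforcing an arithmetic relationship.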
I am using Anaconda, and the following are the packages in my conda environment: