amazon-science / progressive-coordinate-transforms

Progressive Coordinate Transforms for Monocular 3D Object Detection, NeurIPS 2021
Apache License 2.0

Waymo evaluation: Metrics of all Level 1 Objects same as Metrics of [0, 30) Level 1 Objects #18

Closed abhi1kumar closed 2 years ago

abhi1kumar commented 2 years ago

Hi PCT authors, I am using your waymo_eval.py to evaluate my Waymo model. Here is the output:

OBJECT_TYPE_TYPE_VEHICLE_LEVEL_1/AP: 0.34
OBJECT_TYPE_TYPE_VEHICLE_LEVEL_1/APH: 0.33
OBJECT_TYPE_TYPE_VEHICLE_LEVEL_2/AP: 0.02
OBJECT_TYPE_TYPE_VEHICLE_LEVEL_2/APH: 0.02
RANGE_TYPE_VEHICLE_[0, 30)_LEVEL_1/AP: 0.34
RANGE_TYPE_VEHICLE_[0, 30)_LEVEL_1/APH: 0.33
RANGE_TYPE_VEHICLE_[0, 30)_LEVEL_2/AP: 0.04
RANGE_TYPE_VEHICLE_[0, 30)_LEVEL_2/APH: 0.04
RANGE_TYPE_VEHICLE_[30, 50)_LEVEL_1/AP: 0.12
RANGE_TYPE_VEHICLE_[30, 50)_LEVEL_1/APH: 0.12
RANGE_TYPE_VEHICLE_[30, 50)_LEVEL_2/AP: 0.00
RANGE_TYPE_VEHICLE_[30, 50)_LEVEL_2/APH: 0.00
RANGE_TYPE_VEHICLE_[50, +inf)_LEVEL_1/AP: 0.05
RANGE_TYPE_VEHICLE_[50, +inf)_LEVEL_1/APH: 0.05
RANGE_TYPE_VEHICLE_[50, +inf)_LEVEL_2/AP: 0.00
RANGE_TYPE_VEHICLE_[50, +inf)_LEVEL_2/APH: 0.00

Notice that the AP for all Level 1 Vehicles (0.34) is identical to the AP for [0, 30) Level 1 Vehicles (0.34). The same behavior shows up for the Level 1 Vehicle APH and for the other Level 1 classes (not shown here). Normally, the AP over all Level 1 Vehicles should be lower than the AP over [0, 30) Level 1 Vehicles, since the full set also includes the harder mid- and far-range objects; this is what Table 7 of your paper correctly reports.

I am unable to explain this behavior, so I wanted to ask whether you saw anything similar on your end.

PS: the Level 2 metrics do NOT show this behavior. E.g., in the output above, the AP for all Level 2 objects (0.02) is lower than the AP for [0, 30) Level 2 objects (0.04), as expected.
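For reference, here is the quick sanity check I run over the printed metrics. It is only a sketch: `parse_metrics`, `check_level`, and the file name `metrics.txt` are my own, not part of waymo_eval.py, and it assumes the metrics were dumped one "NAME: value" pair per line, exactly as in the output above.

```python
import re

# Matches lines like "RANGE_TYPE_VEHICLE_[0, 30)_LEVEL_1/AP: 0.34".
METRIC_RE = re.compile(r"^(.+)/(APH?):\s*([\d.]+)")

def parse_metrics(path):
    """Return {breakdown_name: {"AP": x, "APH": y}}."""
    metrics = {}
    with open(path) as f:
        for line in f:
            m = METRIC_RE.match(line.strip())
            if m:
                name, kind, value = m.groups()
                metrics.setdefault(name, {})[kind] = float(value)
    return metrics

def check_level(metrics, level, cls="VEHICLE"):
    overall = metrics["OBJECT_TYPE_TYPE_{}_{}".format(cls, level)]["AP"]
    ranges = [metrics["RANGE_TYPE_{}_{}_{}".format(cls, r, level)]["AP"]
              for r in ("[0, 30)", "[30, 50)", "[50, +inf)")]
    # AP is not a linear average over the range buckets, but the overall
    # value should normally fall strictly between the best and worst
    # per-range APs; equality with one endpoint is suspicious.
    if not (min(ranges) < overall < max(ranges)):
        print("Suspicious {} {} metrics: overall AP {} vs per-range APs {}"
              .format(level, cls, overall, ranges))

metrics = parse_metrics("metrics.txt")  # hypothetical dump of the output above
check_level(metrics, "LEVEL_1")
check_level(metrics, "LEVEL_2")
```

On the output above this flags LEVEL_1 (overall 0.34 equals the [0, 30) bucket's 0.34) but not LEVEL_2, which matches what I described.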

I am using Anaconda; the following packages are in my conda environment:

blas                      1.0                         mkl    anaconda
cudatoolkit               10.1.243             h6bb024c_0    anaconda
cudnn                     7.6.5                cuda10.1_0    anaconda
google-auth               1.22.1                     py_0    anaconda
google-auth-oauthlib      0.4.1                      py_2    anaconda
google-pasta              0.2.0                      py_0    anaconda
protobuf                  3.13.0.1         py36he6710b0_1    anaconda
py-opencv                 3.4.2            py36hb342d67_1
python                    3.6.13               h12debd9_1  
tensorboard               2.2.1              pyh532a8cf_0    anaconda
tensorflow                2.1.0           gpu_py36h2e5cdaa_0    anaconda
tensorflow-gpu            2.1.0                h0d30ee6_0    anaconda
abhi1kumar commented 2 years ago

My apologies! I had a bug in my own code, which led to those erroneous results; they were not caused by waymo_eval.py.

I have fixed it and the evaluation results are now as expected.
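For anyone hitting the same symptom: the step that writes predictions into the .bin file consumed by waymo_eval.py is a likely place for this kind of bug. Below is a minimal sketch of that step using the standard waymo_open_dataset protos; the context name, timestamp, and box values are placeholders, not my actual dump loop.

```python
# Minimal sketch of dumping one monocular detection into the .bin file
# consumed by the Waymo evaluation. All concrete values are placeholders.
from waymo_open_dataset import label_pb2
from waymo_open_dataset.protos import metrics_pb2

objects = metrics_pb2.Objects()

o = objects.objects.add()
# These two fields must match the ground-truth frame exactly; otherwise the
# prediction is never associated with any ground-truth object.
o.context_name = "segment-XXXXXXXX"          # placeholder
o.frame_timestamp_micros = 1550083467346370  # placeholder

o.object.type = label_pb2.Label.TYPE_VEHICLE
o.score = 0.8
# 3D box in the vehicle frame: center, dimensions, and heading.
box = o.object.box
box.center_x, box.center_y, box.center_z = 20.0, 1.5, 1.0
box.length, box.width, box.height = 4.5, 1.9, 1.6
box.heading = 0.1

with open("pred.bin", "wb") as f:
    f.write(objects.SerializeToString())
```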

Cc-Hy commented 2 years ago

@abhi1kumar Hello, are you using a monocular model? The performance is so high that it seems a little strange.

abhi1kumar commented 2 years ago

> Are you using a monocular model? The performance is so high that it seems a little strange.

Yes, and that is why I was worried as well. I later found a bug in my code; after fixing it, the evaluation results were as expected.