WeijingShi / Point-GNN

Point-GNN: Graph Neural Network for 3D Object Detection in a Point Cloud, CVPR 2020.
MIT License

Predictions created by run.py #23

Closed Divadi closed 3 years ago

Divadi commented 4 years ago

Hi, I was looking at some of the predictions created by run.py, and I noticed that the final value, which should represent the "score", is often a float well beyond 1.0. For example, here are a couple of lines:

Car -1 -1 0 31.25988144905333 184.79949288582046 223.47710897528953 243.8805704841735 1.468605 1.6488608 4.1514063 -12.696779 1.5654854 19.600437 -0.40132397 116.6333316869114 
Car -1 -1 0 505.6536599536762 169.51657457315582 575.6747515915881 208.75500706020716 1.6064568 1.6516528 4.1954045 -2.5457036 1.1153448 31.65602 1.8633238 41.84836808036084 
Car -1 -1 0 323.85103596253697 178.08695449636517 391.9426736615526 207.07340952779643 1.4520879 1.6204964 4.0298176 -12.782756 1.2804985 38.121017 1.770853 8.83173518350632 

Do you know why this might be the case?

WeijingShi commented 4 years ago

Hi, the output score is not normalized. It is an aggregated version of the classification scores of multiple overlapping boxes, so it can go well beyond 1. You can cap it at a maximum value and normalize it if desired. Thanks,

Divadi commented 4 years ago

Ah, I see. Would you recommend normalizing by the largest value over the entire dataset, or by the largest value within each frame? Also, would it affect the evaluation numbers?

WeijingShi commented 4 years ago

I think normalizing over the entire dataset makes more sense. It should give you the same evaluation numbers as well, since a global rescaling preserves the ranking of detections, and the AP metric depends only on that ranking. Normalizing locally within a frame would give an inaccurate detection a larger score just because it is the only detection in its frame.
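For anyone else who lands here, a minimal sketch of the global normalization described above. It assumes the predictions are KITTI-format `.txt` files (one per frame, score in the last column, as in the lines quoted earlier); the function name and directory layout are made up for illustration, not part of Point-GNN:

```python
from pathlib import Path

def normalize_scores(pred_dir, out_dir):
    """Divide every detection score (last column of each KITTI-format
    line) by the global maximum score over all prediction files,
    so scores land in [0, 1] without changing their ranking."""
    pred_dir, out_dir = Path(pred_dir), Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    files = sorted(pred_dir.glob("*.txt"))

    # First pass: find the global maximum score across the dataset.
    global_max = max(
        (float(line.split()[-1])
         for f in files
         for line in f.read_text().splitlines()
         if line.strip()),
        default=1.0,
    )

    # Second pass: rewrite each file with rescaled scores.
    for f in files:
        out_lines = []
        for line in f.read_text().splitlines():
            if not line.strip():
                continue
            parts = line.split()
            parts[-1] = f"{float(parts[-1]) / global_max:.6f}"
            out_lines.append(" ".join(parts))
        (out_dir / f.name).write_text("\n".join(out_lines) + "\n")
    return global_max
```

Because the same divisor is applied to every frame, the relative ordering of all detections is untouched, which is why the evaluation numbers should not change.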