cguindel / eval_kitti

Tools to evaluate object detection results using the KITTI dataset.

I use the annotation file as the result file for the experiment #20

Open Sunny-Sun-Zhe opened 3 years ago

Sunny-Sun-Zhe commented 3 years ago

I used the annotation file as the result file for evaluation, but the evaluation results are very low. Is this reasonable?

cguindel commented 3 years ago

Hi @Sunny-Sun-Zhe. I think I will need more information. If I assign score 1 to every ground-truth annotation and evaluate them across the whole training set, I obtain 100% AP for every category and level of difficulty, except for Pedestrian_sitting / Easy, which stays at 95%; but this is a glitch caused by the fact that there are fewer valid samples (39) than recall levels (41).
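To illustrate why fewer valid samples than recall levels caps the AP below 100%, here is a simplified Python sketch of the threshold-selection logic used by the KITTI devkit's `evaluate_object.cpp` (function and variable names are mine, and this is an approximation of the C++ code, not the repository's actual implementation). With 39 ground-truth samples, only 39 score thresholds can be selected, but the AP average is still taken over 41 recall sample points, so the result is 39/41 ≈ 95%:

```python
# Simplified sketch of the KITTI devkit's threshold selection (modeled on
# getThresholds() in evaluate_object.cpp); names and structure are my own.
N_SAMPLE_PTS = 41  # recall levels used by the evaluation

def get_thresholds(scores, n_groundtruth, n_sample_pts=N_SAMPLE_PTS):
    """Pick the detection-score thresholds at which recall crosses each
    of the evenly spaced recall sample points."""
    scores = sorted(scores, reverse=True)
    thresholds = []
    current_recall = 0.0
    for i, score in enumerate(scores):
        l_recall = (i + 1) / n_groundtruth
        r_recall = (i + 2) / n_groundtruth if i < len(scores) - 1 else l_recall
        # Skip this detection if the next one lands closer to the target recall.
        if (r_recall - current_recall) < (current_recall - l_recall) \
                and i < len(scores) - 1:
            continue
        thresholds.append(score)
        current_recall += 1.0 / (n_sample_pts - 1.0)
    return thresholds

# Ground truth evaluated against itself: every detection is a true positive
# with score 1, so precision is 1 at every selected threshold.
n_gt = 39
thresholds = get_thresholds([1.0] * n_gt, n_gt)
ap = 100.0 * len(thresholds) / N_SAMPLE_PTS
print(len(thresholds), round(ap, 1))  # 39 thresholds -> AP ~95.1
```

With 41 or more valid samples, roughly all 41 recall points get a threshold and the same computation reaches 100%, which matches the behavior described above.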

Sunny-Sun-Zhe commented 3 years ago

> Hi @Sunny-Sun-Zhe. I think I will need more information. If I assign score 1 to every ground-truth annotation and evaluate them across the whole training set, I obtain 100% AP for every category and level of difficulty, except for Pedestrian_sitting / Easy, which stays at 95%; but this is a glitch caused by the fact that there are fewer valid samples (39) than recall levels (41).

In my follow-up experiment, I assigned a random score to each detection result and found that the evaluation results differ on every run. I think this is caused by the effect of the scores, not by a problem in the program.
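A toy example (not the KITTI devkit) of why random scores can change the result: average precision depends on where any false positive falls in the score ranking, so reshuffling scores reshuffles the precision/recall curve. The helper below is hypothetical and only meant to show the ranking effect:

```python
# Toy illustration: AP depends on the rank position of false positives,
# so randomizing detection scores randomizes the AP whenever FPs exist.
def average_precision(labels_sorted_by_score):
    """labels: 1 = true positive, 0 = false positive, highest score first."""
    tp = fp = 0
    n_pos = sum(labels_sorted_by_score)
    ap = 0.0
    for label in labels_sorted_by_score:
        tp += label
        fp += 1 - label
        if label:  # precision is sampled at each recall step
            ap += tp / (tp + fp)
    return ap / n_pos

print(average_precision([1, 1, 1, 0]))  # FP ranked last  -> 1.0
print(average_precision([0, 1, 1, 1]))  # FP ranked first -> lower
```

If every detection were a true positive (as when the annotations themselves are fed back in), the ordering would not matter; variation across runs therefore suggests that some detections are being counted as false positives or filtered out at some difficulty level.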