PRBonn / lidar-bonnetal

Semantic and Instance Segmentation of LiDAR point clouds for autonomous driving
http://semantic-kitti.org
MIT License

Do the IoU computations used in the training code match the computations used in SemanticKITTI: Semantic Segmentation? #19

Closed SongyiGao closed 4 years ago

tano297 commented 4 years ago

Hi. Yes, it is the same code, but during training the calculation happens only for the points included in the range image. In the final evaluation on the server, ALL points are evaluated, even those that were not represented in the range image. You can look at the paper to see how we obtain labels for those points.

In general, you can expect a lower IoU when evaluating on the point clouds than the value you obtain for the range image.
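For intuition, here is a minimal sketch of why the two numbers differ (this is not the code from this repo, nor the KNN post-processing described in the paper): several points can fall into the same range-image pixel, so points that were not represented simply inherit the label predicted for the pixel they project onto.

import numpy as np

# Hedged sketch, not the repo's dataloader/post-processing: when the cloud is
# projected to a range image, several points can land in the same pixel and
# only one of them is kept. At evaluation time every original point needs a
# label, so the dropped points take the label predicted for the pixel they
# project onto, which is where the extra IoU loss comes from.

def unproject_labels(pred_img, proj_y, proj_x):
    """pred_img: (H, W) predicted labels on the range image.
    proj_y, proj_x: (N,) pixel coordinates of ALL N original points."""
    return pred_img[proj_y, proj_x]              # (N,) per-point labels

# toy example: three points share pixel (0, 0); whichever point "won" the
# pixel decides the label all three receive after unprojection
pred_img = np.array([[2, 0],
                     [0, 1]])
proj_y = np.array([0, 0, 0, 1])
proj_x = np.array([0, 0, 0, 1])
print(unproject_labels(pred_img, proj_y, proj_x))  # -> [2 2 2 1]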

SongyiGao commented 4 years ago


Thank you, I get it. But now I have another point of confusion. I used the labels as the output to test the IoU calculation, as follows:

evaluator.addBatch(proj_labels, proj_labels)

The accuracy output is 1, but the IoU output is only 0.6 to 0.8. Why is it not 1?

Lr: 0.000e+00 | Update: 4.707e-03 mean,2.227e-02 std | Epoch: [0][0/9565] | Time 4.884 (4.884) | Data 0.312 (0.312) | Loss 3.1637 (3.1637) | acc 1.000 (1.000) | IoU 0.632 (0.632) |
Lr: 1.045e-05 | Update: 2.023e-02 mean,3.255e-02 std | Epoch: [0][10/9565] | Time 0.771 (1.148) | Data 0.080 (0.106) | Loss 3.2776 (3.2504) | acc 1.000 (1.000) | IoU 0.789 (0.670) |
Lr: 2.091e-05 | Update: 1.247e-02 mean,2.101e-02 std | Epoch: [0][20/9565] | Time 0.770 (0.978) | Data 0.081 (0.096) | Loss 3.2103 (3.2537) | acc 1.000 (1.000) | IoU 0.789 (0.662) |
tano297 commented 4 years ago

Hi. This is because not all labels are present in one batch. If you accumulate over the entire dataset instead, you will get 1. This is how it is done for the entire validation set, and this is why the evaluator lets you add batches but computes the statistics over the accumulation as a whole. As to why the accuracy is 1: accuracy only considers true positives (which is every point when the ground truth is used as the prediction) over the total of the confusion matrix, making no distinction between classes, whereas the IoU is calculated for each class independently and then averaged (also averaging the classes that are zero due to missing labels).
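A minimal, self-contained sketch of that effect (this is not the repo's iouEval implementation): feeding the ground truth in as the prediction gives accuracy 1, but the per-batch mean IoU stays below 1 whenever some classes do not appear in that batch, because those classes contribute an IoU of 0 to the average. Accumulating the confusion matrix over the whole set recovers 1.

import numpy as np

# Hedged sketch (not the repo's iouEval code): ground truth fed back in as
# the prediction gives accuracy 1, yet the per-batch mean IoU can be < 1
# because classes missing from the batch contribute an IoU of 0 to the mean.

def mean_iou(conf):
    tp = np.diag(conf).astype(float)
    fp = conf.sum(axis=0) - tp
    fn = conf.sum(axis=1) - tp
    union = tp + fp + fn
    # absent classes get IoU 0, matching the "also averaging the ones that
    # are zero due to missing labels" behaviour described above
    iou = np.where(union > 0, tp / np.maximum(union, 1.0), 0.0)
    return iou.mean()

n_classes = 4
conf = np.zeros((n_classes, n_classes), dtype=np.int64)

# one batch containing only classes 0 and 1, predicted perfectly
batch = np.array([0, 0, 1, 1, 1])
np.add.at(conf, (batch, batch), 1)
print(mean_iou(conf))   # 0.5 -> two classes at IoU 1, two missing classes at 0

# accumulating further batches that cover the remaining classes (as the
# validation loop does over the whole set) brings the mean back to 1
rest = np.array([2, 3])
np.add.at(conf, (rest, rest), 1)
print(mean_iou(conf))   # 1.0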

tano297 commented 4 years ago

I will close this now, since it is not actually an issue, but feel free to keep commenting.