Hi,
I know it has been quite some time since you published the code, but this is also a warning for anyone who wants to submit to the SemanticKITTI 4D Panoptic Segmentation benchmark: if even one point is predicted as "ignore", i.e. semantic label 0, the evaluation script divides the summed IoU by 20 instead of 19 when computing the mIoU. To get correct results on the submission benchmark, you should never predict the ignore label.
https://github.com/MehmetAygun/4D-PLS/blob/1d029ae10f71a4b635fe6976174707986331e9d5/utils/eval_np.py#L259
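A minimal sketch of a workaround, assuming predictions are held as a NumPy array of per-point semantic labels (with 0 as the ignore label, as in the SemanticKITTI learning map). The `fallback` class id here is a hypothetical choice; any valid non-zero class avoids the divide-by-20 penalty:

```python
import numpy as np

def remap_ignore(sem_pred: np.ndarray, fallback: int = 9) -> np.ndarray:
    """Replace ignore (0) predictions with a fallback class id.

    sem_pred: per-point semantic labels (0 = ignore in SemanticKITTI).
    fallback: hypothetical choice of a valid class; any non-zero class
    id prevents the evaluation script from dividing the mIoU by 20.
    """
    out = sem_pred.copy()
    out[out == 0] = fallback
    return out

# Example: two points predicted as ignore get remapped before submission.
preds = np.array([0, 1, 5, 0, 13])
print(remap_ignore(preds))
```

A smarter fallback (e.g. the label of the nearest non-ignore point) may give slightly better scores, but even a constant remap avoids the penalized denominator.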