Hi,
First of all, let me thank you for the extremely comprehensive dataset you provide with KITTI-360, as well as the documentation and visualization scripts.
Nevertheless, I have a question that I hope someone can help with. For my research I need the ground-truth class information of the points in the raw Velodyne scans, but I noticed that ground truth is only provided for the accumulated point cloud sequences. In these sequences, it also seems that quite a few points are filtered out compared to the raw scans. So even assuming the accumulated clouds preserve the scan order, it seems difficult or even impossible to efficiently match each raw-scan point with a ground-truth point.
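To illustrate the kind of matching I have been attempting, here is a minimal nearest-neighbour sketch (the function name, threshold, and data layout are my own, and it assumes the raw points have already been transformed into the same world frame as the accumulated clouds):

```python
import numpy as np
from scipy.spatial import cKDTree

def label_raw_scan(raw_points_world, accum_points, accum_labels, max_dist=0.1):
    """Assign each raw-scan point the label of its nearest accumulated point.

    Points with no accumulated neighbour within max_dist metres are assumed
    to have been filtered out of the accumulated cloud and get label -1.
    """
    tree = cKDTree(accum_points)          # KD-tree over the labeled cloud
    dist, idx = tree.query(raw_points_world, k=1)
    return np.where(dist <= max_dist, accum_labels[idx], -1)
```

This works on toy data, but for full scans it is slow, sensitive to the distance threshold, and leaves a large fraction of points at -1 because of the filtering mentioned above, which is why I am asking whether a more direct correspondence exists.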
Without ground truth for the raw scans, it is not really possible to train a semantic segmentation network that operates on them. I was wondering whether I am missing something here. Or am I simply out of luck with KITTI-360, and for per-scan labels I should use something like SemanticKITTI instead?
Thanks!
Kind regards,
Simon