Open ahayler opened 1 year ago
I would also be very interested in that.
Thank you all for the interest in our work.
There might have been a misunderstanding about our data format. For the ground truth, we have two files for each frame: XXX.label and XXX.invalid. The voxels you show should have been marked as "invalid" by the .invalid file.
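As a concrete illustration, here is a minimal sketch of how such a .label/.invalid pair could be combined, assuming the SemanticKITTI-style voxel conventions used by the semantic-kitti-api (uint16 class ids in the .label file, a bit-packed mask with MSB-first bit order in the .invalid file); the function name, the 255 "invalid" marker, and the grid shape are illustrative assumptions:

```python
import numpy as np

def load_voxel_labels(label_path, invalid_path, shape=(256, 256, 32)):
    """Load a .label/.invalid pair and mark invalid voxels.

    Assumes SemanticKITTI-style voxel files: uint16 class ids in
    .label and a bit-packed mask in .invalid (bit set = invalid).
    255 is used here as the "invalid" marker in the returned grid.
    """
    labels = np.fromfile(label_path, dtype=np.uint16).reshape(shape)
    invalid = np.unpackbits(np.fromfile(invalid_path, dtype=np.uint8))
    invalid = invalid.reshape(shape).astype(bool)
    labels = labels.copy()
    labels[invalid] = 255  # the invalid mask overrides the semantic label
    return labels
```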
Hi @ahayler, thanks for your interest in our dataset. Regarding your question, we have carefully double-checked our SSCBench-KITTI360 and visualized frame 2500 of sequence 00 that you mentioned; the visualization should be:
The black voxels here are regarded as "invalid". If we only visualize the "xxx.label" file, it will be:
Now you can see the difference. The cause of your problem could be the visualization tool; please use https://github.com/PRBonn/semantic-kitti-api
for visualization.
Hey @Louis-Leee,
Thank you for the answer! I think you might have misunderstood my post. I generated the light green voxels to show which of the voxels are problematic. Since I used your preprocess script, the .invalid
files should already be taken into account.
Regardless, the same issue I described persists in your visualizations too: under the street, there are about two layers of empty voxels before the black voxels start below. In addition, you can see that in parts the black voxels never start under the scene. Below street level there should be only occupied or invalid (black) voxels, and no empty space!
Let me know if it is still unclear!
Dear all, first of all, thank you for providing the dataset. I am trying to evaluate a model on your KITTI-360 benchmark but have run into one issue: in every scene there are a lot of empty (unlabeled) voxels under the scene that are not labeled as invalid. You can see these voxels in green in the plot above for frame 2500 of the 00 sequence, but I have checked, and they seem to be present in every scene of the KITTI-360 dataset. The exact definition I used is: an empty voxel with z <= 6 and only 0 or 255 voxels under it with respect to the z-axis. Have I loaded the data incorrectly? I used your preprocess script with the corresponding .yaml file (both provided in your repository) to generate the 256x256x32 numpy arrays. These voxels pose a significant challenge to methods that are not directly trained on your dataset, e.g. methods using only 2D supervision.
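To make the criterion above reproducible, here is a small sketch of the check as I described it (an empty voxel at z <= 6 whose entire column below contains only labels 0 or 255); the function name and the assumption that z is the last axis of the 256x256x32 grid are mine:

```python
import numpy as np

def find_unlabeled_under_scene(grid, max_z=6):
    """Return a boolean mask of the 'problematic' voxels.

    A voxel is flagged if it is empty (label 0), lies at z <= max_z,
    and every voxel below it along the z-axis is 0 (empty) or
    255 (invalid). grid is assumed to have z as its last axis.
    """
    empty_or_invalid = (grid == 0) | (grid == 255)
    # below_clear[..., z] is True iff all voxels at heights < z are 0/255
    below_clear = np.ones_like(empty_or_invalid)
    below_clear[..., 1:] = np.cumprod(empty_or_invalid[..., :-1], axis=-1)
    mask = (grid == 0) & below_clear
    mask[..., max_z + 1:] = False  # only voxels at z <= max_z count
    return mask
```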
Kind regards, Adrian