That's an interesting idea, but we don't consider this: with real data you only get a LiDAR scan (or maybe a history of past scans). One simply has no occlusion data for the unobservable parts. You might only know that you have not seen something, and you want to predict what might be there from your experience. Therefore we don't provide occlusion labels for the test data.
Thanks for your quick reply! I got it.
Hi,
Thank you for sharing this work. I have a question about the occluded voxels file. In the description of the dataset, you state:
"a file XXXXXX.occluded in a packed binary format that contains for each voxel a flag that specifies if this voxel is either occupied by LiDAR measurements or occluded by a voxel in line of sight of all poses used to generate the completed scene."
However, when I plot the voxels with label == 1 in the occluded grid, the occluded voxels seem to be occluded from the line of sight of only one pose, namely the pose of the scan in the corresponding file (XXXXXX.bin in this case).
Am I wrong? Thanks and have a nice day!
I was comparing the occluded grid with the ground-truth grid, and the occluded voxels actually seem to be the ones that are occluded when the ground truth is seen from the current scan pose. You can see this in the image below.
Is this the case? Thanks again.
I have to look up the exact code for the generation, but as far as I remember these are indeed occlusions from all positions. The motorcyclist on the street, for example, looks like it was only seen from the first position, since it rides in front of the car.
Thanks for the quick reply,
I think it can't be occlusions from all positions: if that were the case, the voxels behind the vehicles close to the sensor pose should not be occluded, because they have been seen from later sensor poses. No?
I think it's the union of the occlusions, not the intersection, but as I said before, I have to check the code that generates the grids.
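To make the distinction concrete, here is a toy sketch of the two interpretations (not the actual generation code; the per-pose masks are hypothetical):

```python
import numpy as np

# Hypothetical per-pose occlusion masks (True = occluded from that pose).
mask_pose_a = np.array([True, True, False, False])
mask_pose_b = np.array([True, False, True, False])

union = mask_pose_a | mask_pose_b         # occluded from at least one pose
intersection = mask_pose_a & mask_pose_b  # occluded from every pose

print(union)         # [ True  True  True False]
print(intersection)  # [ True False False False]
```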
I'm not the expert for SSC, and @mgarbade is probably the right person to comment on this.
Sorry, just saw this thread, will try to comment sequentially
Hi, thank you for sharing such excellent work! I have a question about the data. In the test set, only .bin files are supplied; in my opinion, the .occluded files could also be provided. The visible free space is known once we have the observed surface, so we only need to perform semantic segmentation and completion on the visible surface and in the occluded region. If we knew where the visible free space is, specially designed methods could reduce the computational cost in that area, where no computation is needed. Hoping for your reply!
This is the case for semantic scene completion in the paper of Song et al. In their scenario (indoor + Kinect), the space in front of the visible surface can safely be assumed to be empty. This is not the case for SemanticKITTI! Since the laser has to look much further ahead, one cannot assume the space in between two laser rays with a low angular resolution to be empty. Therefore there is no such thing as 'free empty space' in semantic-kitti-completion.
However, when I plot the voxels with label == 1 in the occluded grid, the occluded voxels seem to be occluded from the line of sight of only one pose, namely the pose of the scan in the corresponding file (XXXXXX.bin in this case).
That sounds correct.
Only invalid voxels are ignored during evaluation.
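For illustration, here is a sketch of how masking out invalid voxels during evaluation could look; this is an assumption about the procedure, not the actual benchmark code, and all names are illustrative:

```python
import numpy as np

def completion_iou(pred, target, invalid):
    # Sketch: drop invalid voxels, then compute completion IoU over the rest.
    valid = ~invalid.astype(bool)
    p = (pred > 0) & valid    # predicted occupied, valid voxels only
    t = (target > 0) & valid  # ground-truth occupied, valid voxels only
    intersection = np.logical_and(p, t).sum()
    union = np.logical_or(p, t).sum()
    return intersection / union if union else 0.0
```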
@mgarbade Thanks for clarifying!
What would be the purpose of these occluded voxels?
Following the work of Song et al., the occluded voxels are used to subsample the space to predict: 2*N voxels are randomly sampled from the empty occluded space in order to address the class imbalance between empty and occupied voxels in the target, where N is the number of occupied voxels in the target.
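A minimal sketch of that sampling scheme, assuming dense numpy grids (variable and function names are illustrative):

```python
import numpy as np

def sample_training_voxels(target, occluded, rng=np.random.default_rng()):
    # Sketch of the Song et al. style subsampling: keep all N occupied target
    # voxels plus 2*N randomly drawn empty-but-occluded voxels.
    occupied = target > 0
    n = int(occupied.sum())
    empty_occluded = np.flatnonzero(~occupied & occluded.astype(bool))
    picked = rng.choice(empty_occluded,
                        size=min(2 * n, empty_occluded.size),
                        replace=False)
    mask = occupied.ravel().copy()
    mask[picked] = True  # compute the loss only on the masked voxels
    return mask.reshape(target.shape)
```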