Sorry for the delay:
The configuration files are mainly included to show how we evaluate the single and multiple scan experiments on the server. Therefore, they have to account for classes that are merged or ignored in the evaluation.
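For concreteness, the remapping boils down to applying the `learning_map` from the config to the raw labels before any metric is computed. A minimal sketch of the idea (paths are placeholders, not the exact evaluator code):

```python
import numpy as np
import yaml

# Sketch: collapse raw SemanticKITTI label ids into the reduced class set
# using the learning_map from the config, so merged/ignored classes are
# handled before evaluation. Paths here are placeholders.
with open("config/semantic-kitti.yaml") as f:
    cfg = yaml.safe_load(f)

learning_map = cfg["learning_map"]                     # raw id -> train id
lut = np.zeros(max(learning_map.keys()) + 1, dtype=np.int32)
for raw_id, train_id in learning_map.items():
    lut[raw_id] = train_id

labels = np.fromfile("sequences/08/labels/000000.label", dtype=np.uint32)
sem = labels & 0xFFFF          # lower 16 bits hold the semantic label
sem_mapped = lut[sem]          # merged classes share one id; ignored ones map to 0
```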
Hope that answers your questions. Feel free to comment here if you still have doubts.
Ok, sir! Thanks for your reply.
How can I visualize the point cloud on a server that doesn't have a monitor?
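For example, would rendering the scan off-screen and saving it to an image be a reasonable workaround? A rough sketch of what I mean (placeholder paths, just a bird's-eye-view projection with matplotlib's Agg backend):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")              # off-screen backend, no display needed
import matplotlib.pyplot as plt

# Hypothetical workaround: project a scan to a bird's-eye view and save it
# as a PNG instead of opening an interactive window. Paths are placeholders.
scan = np.fromfile("sequences/00/velodyne/000000.bin",
                   dtype=np.float32).reshape(-1, 4)   # x, y, z, remission
x, y, z = scan[:, 0], scan[:, 1], scan[:, 2]

fig, ax = plt.subplots(figsize=(8, 8), dpi=150)
ax.scatter(x, y, c=z, s=0.2, cmap="viridis")
ax.set_aspect("equal")
ax.set_title("bird's-eye view, colored by height")
fig.savefig("scan_bev.png", bbox_inches="tight")
```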
Many thanks to your team for the open-source code and toolbox. I've been trying to reproduce your awesome earlier work with lidar-bonnetal, and I have two questions about visualize.py.
The first is about the marked part. Is this related to #57? The labels of those points are "0", meaning "unlabeled", since they are 50 m away.
The second is about the config. There are two SemanticKITTI configs: one with 20 classes and one with 26 classes. I think visualize.py doesn't use the label mapping; the code and the result prove that:
https://github.com/PRBonn/semantic-kitti-api/blob/c2d7712964a9541ed31900c925bf5971be2107c2/auxiliary/laserscan.py#L246-L247
https://github.com/PRBonn/semantic-kitti-api/blob/c2d7712964a9541ed31900c925bf5971be2107c2/auxiliary/laserscan.py#L263
The RGB of the cyan part should be (0, 0, 0), since in semantic-kitti.yaml: (50,255,255) 👉 (0,0,0)
https://github.com/PRBonn/semantic-kitti-api/blob/c2d7712964a9541ed31900c925bf5971be2107c2/config/semantic-kitti.yaml#L135
So I think the only use of the config in visualize.py is to provide the original 34-class color_map:
https://github.com/PRBonn/semantic-kitti-api/blob/c2d7712964a9541ed31900c925bf5971be2107c2/visualize.py#L18-L24
I think this will lead to some misunderstandings, because visualize.py shows it as cyan, while showing scans during training and the results of the predictions show it as black.
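If I understand correctly, to make the viewer consistent with training and the predictions, the labels would need to go through learning_map and then learning_map_inv before the color lookup. Just a sketch of what I mean (not the repo's actual code, paths are placeholders):

```python
import numpy as np
import yaml

# Sketch: remap raw labels with learning_map, map the reduced ids back with
# learning_map_inv, and only then look up colors, so merged/ignored classes
# come out as the color of the class they are mapped to (black for class 0).
with open("config/semantic-kitti.yaml") as f:
    cfg = yaml.safe_load(f)

remap = np.zeros(max(cfg["learning_map"]) + 1, dtype=np.int32)
for raw_id, train_id in cfg["learning_map"].items():
    remap[raw_id] = train_id

inv = np.zeros(max(cfg["learning_map_inv"]) + 1, dtype=np.int32)
for train_id, raw_id in cfg["learning_map_inv"].items():
    inv[train_id] = raw_id

color_lut = np.zeros((max(cfg["color_map"]) + 1, 3), dtype=np.float32)
for raw_id, bgr in cfg["color_map"].items():          # colors are stored as BGR
    color_lut[raw_id] = np.array(bgr, dtype=np.float32) / 255.0

sem = np.fromfile("sequences/08/labels/000000.label", dtype=np.uint32) & 0xFFFF
colors = color_lut[inv[remap[sem]]]    # ignored classes now map to class 0 -> black
```

With that, the cyan class would get the color of whatever class it is merged into.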
At first I thought I had made a mistake when modifying the code, since I'm a noob :-)