Hey @Mathanraj-Sharma,
Sorry for the late reply. The depth, normal and intensity data are correct, but the semantic labels seem incorrect.
You could check the utils for the mapping between colors and semantic classes, and verify whether the predictions are correct.
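For example, a rough sketch of such a check could look like the following (the yaml file name, its `color_map` key, and the label-image format are assumptions; adapt them to the config shipped with the utils):

```python
# Sanity-check the semantic cue: colorize a predicted label image with a
# semantic-kitti-style color map and inspect it visually.
import yaml
import numpy as np
import matplotlib.pyplot as plt

cfg = yaml.safe_load(open('semantic-kitti.yaml'))  # assumption: config file name
color_map = cfg['color_map']                       # label id -> BGR color

sem_label = np.load('000000_sem.npy')              # assumption: HxW label image

color_image = np.zeros(sem_label.shape + (3,), dtype=np.uint8)
for label_id, bgr in color_map.items():
    color_image[sem_label == label_id] = bgr[::-1]  # convert BGR to RGB

plt.imshow(color_image)
plt.show()
```

If the colored image does not show the expected classes (road, buildings, vegetation, etc.), the segmentation model is likely the problem rather than the projection.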
If you are using the pretrained semantic segmentation model, it may not generalize well to a new LiDAR scanner. You may need to fine-tune or retrain a semantic segmentation model. More details about training a new model can be found in our RangeNet++ repo: https://github.com/PRBonn/lidar-bonnetal.
I hope this helps.
Hello @Chen-Xieyuanli, I wanted to know something. I can see you have provided a way to generate the normal and range data for training the OverlapNet model. Can you specify a way to get the intensity data in the format that you are using here?
Hey @ArghyaChatterjee,
The way to generate the intensity data is also provided and shown in demo1.
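For reference, here is a minimal sketch of how an intensity image can be built from a KITTI-style .bin scan via spherical projection, in the same spirit as the range image in demo1. The image size and field of view below are assumptions for an HDL-64E-like sensor, not the exact code from the repo:

```python
import numpy as np

def intensity_projection(scan_path, proj_H=64, proj_W=900,
                         fov_up=3.0, fov_down=-25.0):
    """Project the 4th channel (intensity) of a KITTI .bin scan to a 2D image."""
    scan = np.fromfile(scan_path, dtype=np.float32).reshape(-1, 4)
    xyz, intensity = scan[:, :3], scan[:, 3]

    depth = np.linalg.norm(xyz, axis=1)
    valid = depth > 0                      # drop degenerate points
    xyz, intensity, depth = xyz[valid], intensity[valid], depth[valid]

    fov_up_rad = np.radians(fov_up)
    fov_down_rad = np.radians(fov_down)
    fov = abs(fov_up_rad) + abs(fov_down_rad)

    yaw = -np.arctan2(xyz[:, 1], xyz[:, 0])
    pitch = np.arcsin(xyz[:, 2] / depth)

    # normalize angles to pixel coordinates
    proj_x = 0.5 * (yaw / np.pi + 1.0) * proj_W
    proj_y = (1.0 - (pitch + abs(fov_down_rad)) / fov) * proj_H
    proj_x = np.clip(np.floor(proj_x), 0, proj_W - 1).astype(np.int32)
    proj_y = np.clip(np.floor(proj_y), 0, proj_H - 1).astype(np.int32)

    # write farther points first so closer points overwrite them
    order = np.argsort(depth)[::-1]
    intensity_image = np.full((proj_H, proj_W), -1, dtype=np.float32)
    intensity_image[proj_y[order], proj_x[order]] = intensity[order]
    return intensity_image
```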
Hey @ArghyaChatterjee,
Since there is no further update, I would like to close this issue.
If you have any further questions, please feel free to ask me to reopen it!
@Chen-Xieyuanli I would like to train the OverlapNet model on indoor LiDAR data. Is it possible to train it without semantic maps?
Yes, you can find the options in the network yaml file. You could train OverlapNet with depth and normals only.
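For illustration, a hedged sketch of toggling such options is shown below. The key names (`use_depth`, `use_normals`, `use_intensity`, `use_class_probs`) and the config path are assumptions meant to show the idea; check the yaml shipped with the repo for the exact keys:

```python
import yaml

# Load the network configuration (path is an assumption).
with open('config/network.yml') as f:
    config = yaml.safe_load(f)

# Keep the geometric cues, drop intensity and semantics (key names assumed).
config['use_depth'] = True
config['use_normals'] = True
config['use_intensity'] = False
config['use_class_probs'] = False   # train without semantic maps

with open('config/network_no_semantics.yml', 'w') as f:
    yaml.safe_dump(config, f)
```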
Hi, I am referring to OverlapNet for my research, and I am using the Oxford Newer College dataset for my study.
The dataset itself comes as rosbag files. I successfully saved the point cloud messages into the .bin file format, the same as in the KITTI odometry dataset (a sketch of the conversion is below). I also created semantic_probs by running inference with rangenet_lib. I have attached a few outputs of demo-1 here.
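A minimal sketch of such a conversion, assuming the standard rosbag and sensor_msgs Python API (the bag path, topic name and `intensity` field name are placeholders for the Ouster sensor in the Newer College dataset):

```python
# Dump PointCloud2 messages from a rosbag into KITTI-style .bin files
# (Nx4 float32: x, y, z, intensity).
import os
import numpy as np
import rosbag
import sensor_msgs.point_cloud2 as pc2

bag_file = 'newer_college.bag'           # placeholder: path to the rosbag
lidar_topic = '/os1_cloud_node/points'   # placeholder: LiDAR topic name
out_dir = 'velodyne'
os.makedirs(out_dir, exist_ok=True)

with rosbag.Bag(bag_file) as bag:
    for idx, (_, msg, _) in enumerate(bag.read_messages(topics=[lidar_topic])):
        points = np.array(
            list(pc2.read_points(msg,
                                 field_names=('x', 'y', 'z', 'intensity'),
                                 skip_nans=True)),
            dtype=np.float32)
        points.tofile(os.path.join(out_dir, '%06d.bin' % idx))
```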
Could you please tell me how I can verify the correctness of the cues?