Closed · YanhaoWu closed this issue 2 years ago
Hi @WYHbxer!
For SemanticKITTI we used the pre-processing from the PointContrast repository with small changes. Basically, we use the SemanticKITTI scan poses to aggregate a series of scans (as in the original pre-processing) and then extract views from these aggregated point clouds. Next, we use a KDTree, as in PointContrast, to extract the corresponding points between views.
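Since the original script is no longer available (see below), here is a minimal sketch of the pipeline described above: aggregate scans into the world frame using their poses, crop overlapping views, and match corresponding points with a KDTree. All function names and the spherical-crop view extraction are illustrative assumptions, not the authors' actual code.

```python
import numpy as np
from scipy.spatial import cKDTree

def aggregate_scans(scans, poses):
    """Transform each (N, 3) scan into the world frame with its 4x4 pose
    and concatenate them into one aggregated point cloud."""
    clouds = []
    for pts, T in zip(scans, poses):
        homo = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coords
        clouds.append((homo @ T.T)[:, :3])
    return np.vstack(clouds)

def extract_view(points, center, radius):
    """Crop a spherical region ("view") from the aggregated cloud.
    (Illustrative; the actual view extraction may differ.)"""
    mask = np.linalg.norm(points - center, axis=1) < radius
    return points[mask]

def find_correspondences(view_a, view_b, max_dist=0.05):
    """Match each point in view_a to its nearest neighbor in view_b
    via a KDTree, keeping pairs closer than max_dist, as in PointContrast.
    Returns an (M, 2) array of (index_in_a, index_in_b) pairs."""
    tree = cKDTree(view_b)
    dist, idx = tree.query(view_a, k=1)
    keep = dist < max_dist
    return np.stack([np.nonzero(keep)[0], idx[keep]], axis=1)

# Example with synthetic data (identity poses):
rng = np.random.default_rng(0)
scans = [rng.random((200, 3)), rng.random((200, 3))]
agg = aggregate_scans(scans, [np.eye(4), np.eye(4)])
view_a = extract_view(agg, np.array([0.5, 0.5, 0.5]), 0.4)
view_b = extract_view(agg, np.array([0.6, 0.5, 0.5]), 0.4)
pairs = find_correspondences(view_a, view_b)
```

In practice the scans and poses would come from the SemanticKITTI `velodyne` and `poses.txt` files, and the matched pairs serve as the positive correspondences for the contrastive loss.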
Thank you very much for your reply! Could you also publish this part of the code? I look forward to your reply. Thank you!
Sorry for the late reply. I was trying to find this piece of code to use PointContrast pre-processing for KITTI but unfortunately I don't have it anymore. :/
It's okay, thank you :)
Hello, thank you very much for your work! In the comparative experiments, you trained PointContrast on the SemanticKITTI dataset. PointContrast requires capturing the same scene from different views to generate sample pairs, but with SemanticKITTI this seems difficult to do. May I ask how you processed this part of the data? I hope to get your reply. Thank you!