Open PointCloudYC opened 2 years ago
Thanks for the interesting question. As we have stated in the paper, AziNorm is highly extensible; both indoor and outdoor scenes are fine. The limitation of applying AziNorm is whether the data really possess radial symmetry. LiDAR is usually used for outdoor scenes, while indoor scenes (like S3DIS) are mainly captured with RGB-D cameras. Point clouds generated by both LiDAR and RGB-D cameras possess radial symmetry, because both the laser rays of LiDAR and the visible light captured by an RGB-D camera pass through the point source (or the optical center) and travel along the radial direction. But for indoor scenes, if the point clouds are captured from different optical centers (i.e., the position of the RGB-D camera changes) and merged together, the radial symmetry is ruined and AziNorm may be ineffective.
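The radial-symmetry idea can be sketched in code. Below is a minimal illustration, not the paper's implementation: the `azinorm` helper and the point layout are hypothetical. It normalizes a patch by rotating it about the sensor origin so the patch center's azimuth becomes zero; a patch and a rotated copy of that patch then map to identical local coordinates, which is the kind of invariance AziNorm relies on.

```python
import math

def azinorm(points, center):
    # Rotate all points about the sensor origin (z-axis) so that the
    # patch center's azimuth maps to zero. This is a simplified sketch
    # of azimuthal normalization, not AziNorm's exact transform.
    az = math.atan2(center[1], center[0])
    c, s = math.cos(-az), math.sin(-az)
    return [(c * x - s * y, s * x + c * y, z) for x, y, z in points]

# A small patch, and the same patch after rotating the whole scene
# 90 degrees about the sensor origin (hypothetical coordinates).
patch = [(3.0, 4.0, 1.0), (3.5, 4.5, 1.2)]
rotated = [(-y, x, z) for x, y, z in patch]

a = azinorm(patch, patch[0])
b = azinorm(rotated, rotated[0])
# a and b agree: normalization cancels any rotation about the sensor.
```

Note the invariance only holds for rotations about a single optical center; if scans from different sensor positions are merged into one world frame, no single azimuth rotation about one origin normalizes them all, which is the failure mode discussed above.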
Sorry for the late reply. Thanks for your insightful comments.
So your concern is that the indoor dataset (S3DIS) might not possess the radial symmetry property, since each area's point cloud in S3DIS (e.g., Area_1) is collected from different locations and then registered together to form the data of that area. Such a registration operation might break the radial symmetry.
Let me know if my understanding deviates from your thinking.
Sorry for the late reply. Your understanding is correct.
Hi! In the paper, AziNorm was plugged into KPConv and tested on the autonomous driving dataset SemanticKITTI. I was wondering whether the method is applicable to indoor or other outdoor datasets (e.g., S3DIS or Semantic3D)? I would really appreciate it if you could share some experience or insights on that. Thanks.