QingyongHu / RandLA-Net

🔥RandLA-Net in Tensorflow (CVPR 2020, Oral & IEEE TPAMI 2021)

removing grid_subsampling #230

Closed mailtohrishi closed 2 years ago

mailtohrishi commented 2 years ago

Hi... in the preprocessing/data preparation stage, the point cloud is subsampled: points that fall within the same fine grid cell are collapsed into a single point (the barycenter or voxel center). I could see it happening step by step by adding a lot of print statements and recompiling the .so files. But my basic doubt is: why is grid subsampling needed in the first place? There is no mention of it in the paper, and it takes a lot of preparation time. So if someone can confirm that it is indeed needed, that's okay; otherwise, can we drop the subsampling step?

QingyongHu commented 2 years ago

Thanks for your interest in our work!

To clarify, we follow KPConv in applying grid subsampling at the beginning. The main reasons are:

1. For particularly large datasets such as Semantic3D (4 billion points in total), grid subsampling effectively reduces the total number of points (i.e., reduces redundancy), saving memory and computational cost.
2. As described in the KPConv paper, grid subsampling helps deal with varying point densities.
3. Finally, reducing the point density up front helps enlarge the receptive field in RandLA-Net, since KNN is used in the neighbor search step.
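The barycenter variant of this preprocessing can be sketched in a few lines of NumPy. This is a minimal illustration of the idea only, not the repository's implementation (which calls a compiled C++ op for speed, and also pools features and labels); the function name and `grid_size` default are hypothetical:

```python
import numpy as np

def grid_subsample(points, grid_size=0.06):
    """Collapse all points falling into the same voxel of side `grid_size`
    into a single point at the voxel barycenter (mean of its points).

    points: (N, D) float array of coordinates. Returns an (M, D) array, M <= N.
    """
    # Integer voxel coordinates for every point.
    voxels = np.floor(points / grid_size).astype(np.int64)
    # Map each point to its voxel: `inverse[i]` is the index of point i's voxel.
    _, inverse, counts = np.unique(
        voxels, axis=0, return_inverse=True, return_counts=True
    )
    # Sum the coordinates of the points in each voxel...
    sums = np.zeros((counts.shape[0], points.shape[1]), dtype=points.dtype)
    np.add.at(sums, inverse, points)
    # ...and divide by the per-voxel count to get barycenters.
    return sums / counts[:, None]
```

For example, two points 1 cm apart collapse to their midpoint under a 10 cm grid, while an isolated point passes through unchanged. This caps the local density at roughly one point per voxel, which is what makes a fixed-K KNN cover a more uniform spatial extent.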

Finally, we would like to clarify that in our implementation grid subsampling only needs to be performed once, at the beginning (sorry for the excessive printed output). Also, this step is not mandatory: you are free to remove the preprocessing, especially when your custom point clouds are already sparse.

I hope this could answer your question!

mailtohrishi commented 2 years ago

Thank you so much! That confirms my hunch, and it is also good to learn that the step originated in KPConv.