Hello again,

Maybe you could clarify the meaning of the `seed_k` and `seed_k_alpha` parameters and share some intuition behind tuning them for a specific dataset? Thanks!
Hi, sure, the two parameters accomplish the following:

- `seed_k`: During training, the patch size is set to 1000 points, and that's the patch size we want during inference as well. So we want to make sure that we have enough 1000-point patches to cover the entire point cloud of interest. Smaller values on large point clouds can lead to insufficient coverage of all points. `num_patches` also depends on the number of points N to make sure we do build enough patches, but `seed_k` is there to ensure that we cover the point set well enough.
- `seed_k_alpha`: This controls `patch_step`, i.e. how many patches are processed at a time during inference. It is purely a GPU resource consideration and does not change the denoised result.
So if you have a point cloud of, say, 1M points, you may need a higher `seed_k` and will most probably need a higher `seed_k_alpha`. Instead of making `num_patches` and `patch_step` depend on `seed_k` and `seed_k_alpha` (and also N), you can hard-code the values if you know your specific use case and dataset (given your GPU resources).
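For concreteness, the relationship looks roughly like this (a minimal sketch; the exact formulas in the released inference script may differ):

```python
N = 1_000_000      # points in the cloud
patch_size = 1000  # patch size used during training

seed_k = 6         # coverage factor: more seed patches -> better coverage
seed_k_alpha = 2   # larger value -> fewer patches per chunk -> less VRAM

# More points and a higher seed_k mean more sampled patches:
num_patches = int(seed_k * N / patch_size)         # e.g. 6000

# seed_k_alpha only throttles how many patches go through the network
# at once; it trades speed for memory, not output quality:
patch_step = int(N / (seed_k_alpha * patch_size))  # e.g. 500

# If you know your dataset and GPU, you can simply hard-code instead:
# num_patches, patch_step = 6000, 500
```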
Hope that helps!
Hello again, thanks for the fast answer, as always!
> During training, the patch size is set to 1000 points, and that's the patch size we want during inference as well. […]
Oh, so the patch size parameter should not really be changed at inference then, unless we retrain, and instead `seed_k` is the one to experiment with?

And as for `seed_k_alpha`: this is similar to the intuition I had, but just to make sure, it is there purely for VRAM considerations (which is indeed a concern for me :)), and it is not expected to actually change the resulting point cloud, unlike `seed_k`?
Also, when I split the point cloud into sub-regions and run the method on those sub-regions, the method tends to push the points not only towards the underlying clean surfaces, but also slightly towards the center of the sub-region. As a result, this creates noticeable boundaries between the sub-regions. Do you know if that is expected behaviour (that the points are also pushed slightly towards the center), or is it something on my side, or have you not come across it?
Hi, all good. Glad to help!
So the patch size can be changed in some instances, but you need to verify that it doesn't lead to extreme aberrations in the denoised outputs. Basically, it's trial and error. However, the number of edge connections (k=32) should not be changed, as that affects the network: the graph signals are based on the k parameter, while the patch size affects the overall size of the graph but not each local region within it.
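To make the distinction concrete, here is a small sketch of a per-patch kNN graph (illustrative only, not the repo's code):

```python
import torch

def knn_edges(patch, k=32):
    # patch: (P, 3) point patch; P (the patch size) may vary at inference,
    # but k is baked into the trained graph convolutions and must not.
    dist = torch.cdist(patch, patch)                       # (P, P) distances
    nbrs = dist.topk(k + 1, largest=False).indices[:, 1:]  # drop self-match
    src = torch.arange(patch.shape[0]).repeat_interleave(k)
    return torch.stack([src, nbrs.reshape(-1)])            # (2, P*k) edges

# A bigger patch just yields a bigger graph; every node still has 32 edges:
e1 = knn_edges(torch.randn(1000, 3))  # training-time patch size
e2 = knn_edges(torch.randn(2000, 3))  # larger patch, same local structure
```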
Yes, `seed_k_alpha` is only used for resource considerations, unlike the choice of `seed_k`, which impacts the denoised result, as the number of sampled input patches changes based on the latter.
As for the last question, I have seen this behaviour myself. It's an artifact of the class of loss functions we use for the denoising process (these L2-norm loss functions seem to make the denoised result contract a bit). This was observed in the PointCleanNet paper, and they addressed it using a Taubin-smoothing-like function that adjusts the denoising displacements and inflates the denoised point cloud. I have an implementation of this, from an older paper I wrote: https://github.com/ddsediri/CLJNEPCF/blob/63f75c53e2265b3136e6308e18312db54c9627d6/Inference.py#L27
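Roughly, the inflation idea looks like this (a sketch of the PointCleanNet-style adjustment as I understand it, not the exact code behind the link):

```python
import numpy as np
from scipy.spatial import cKDTree

def inflate(points, disp, k=16):
    # points: (N, 3) noisy input; disp: (N, 3) predicted displacements.
    # Averaging each point's neighbour displacements isolates the smooth,
    # low-frequency "contraction" component of the displacement field;
    # subtracting it keeps the denoising correction but undoes shrinking.
    _, idx = cKDTree(points).query(points, k=k)  # (N, k) neighbour indices
    return points + disp - disp[idx].mean(axis=1)
```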
Have a look and see if using this helps with the contraction. Other than that, I don't know of any methods that can be readily applied. In the IterativePFN paper, we proposed sampling patches so that they overlap, and then picking the best filtered points amongst the overlapping regions. When you partition point clouds into sub-regions with hard boundaries, there is no overlap, and this causes contraction of the regions (and splitting along the partition lines). Maybe creating a large-scale patch-stitching method could rectify this.
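As a rough starting point in that direction, one could cut overlapping sub-regions and keep only each region's core after denoising, so the seams fall where points were filtered with full context (a hypothetical sketch, not code from the paper; `denoise` stands in for whatever filtering you run per region):

```python
import numpy as np

def overlapping_regions(points, size=5.0, margin=0.5):
    # Yields (context_mask, core_mask) pairs over an XY grid of tiles.
    # Each tile is denoised together with `margin` of surrounding context,
    # but only points from the non-overlapping core are kept afterwards.
    lo, hi = points[:, :2].min(0), points[:, :2].max(0)
    nx, ny = np.maximum(1, np.ceil((hi - lo) / size).astype(int))
    for i in range(nx):
        for j in range(ny):
            a = lo + np.array([i, j]) * size
            b = a + size
            core = np.all((points[:, :2] >= a) & (points[:, :2] < b), axis=1)
            ctx = np.all((points[:, :2] >= a - margin) &
                         (points[:, :2] < b + margin), axis=1)
            if core.any():
                yield ctx, core

# for ctx, core in overlapping_regions(cloud):
#     out = denoise(cloud[ctx])      # hypothetical per-region denoiser
#     result[core] = out[core[ctx]]  # keep only the core's points
```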
Thanks for all your comments, it was very helpful! :)