Closed GG-Bonds closed 5 days ago
Due to limited GPU memory, we did not train the model with a larger point_nsample. However, we did experiment with point_nsample=256, and the results were not much different from those with point_nsample=512. I believe the reason is that, over the course of training, every point has a chance to be randomly sampled and seen by the network.
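To illustrate the point above, here is a minimal sketch (not the repository's actual code; the function name and toy sizes are hypothetical) showing why random per-iteration subsampling lets the network see nearly all points even with a smaller point_nsample:

```python
import numpy as np

def sample_points(points, point_nsample, rng):
    """Randomly subsample a fixed number of points from a point cloud.

    Indices are drawn independently each call, so across many training
    iterations every point is eventually sampled, which may explain why
    point_nsample=256 performs close to point_nsample=512.
    """
    idx = rng.choice(points.shape[0], size=point_nsample, replace=False)
    return points[idx]

# Toy check: a cloud of 2048 points, sampling 256 per "iteration".
rng = np.random.default_rng(0)
cloud = np.random.rand(2048, 3)
seen = np.zeros(2048, dtype=bool)
for _ in range(50):
    idx = rng.choice(2048, size=256, replace=False)
    seen[idx] = True
print(seen.mean())  # fraction of points sampled at least once
```

With these toy numbers, the chance a given point is never drawn in 50 iterations is (1 - 256/2048)^50, roughly 0.1%, so coverage is close to complete; a larger point_nsample mainly changes how fast that coverage is reached, not whether it happens.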
I noticed point_nsample == 512 in the code. Have the authors run ablation experiments on point_nsample? My understanding is that a larger point_nsample provides more supervision signals per iteration, so the results should be better.