xyjbaal / FPCC


about input point cloud size #9

Closed · waiyc closed this issue 2 years ago

waiyc commented 2 years ago

Hi,

Your paper says: "In each batch in the training process, input points (N = 4,096) are randomly sampled from each scene and each point can be sampled only once." I just wanted to check with you: does the input point cloud size matter in this case, in terms of prediction accuracy of the segmentation results?

For example, if we keep the input point cloud size at ~4,096 points so that no points are dropped in the sampling step, would that give higher prediction accuracy?
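For reference, this is roughly the sampling step I mean. A minimal sketch with numpy, assuming the scene is an (P, 3) array; the function name and the padding behaviour for small scenes are my own, not taken from the FPCC code:

```python
import numpy as np

def sample_scene(points, n_sample=4096):
    """points: (P, 3) array of scene points -> (n_sample, 3) random subset."""
    n = points.shape[0]
    # Sample without replacement when possible, so each point is used at most once;
    # if the scene has fewer points than n_sample, fall back to sampling with replacement.
    replace = n < n_sample
    idx = np.random.choice(n, n_sample, replace=replace)
    return points[idx]

scene = np.random.rand(20000, 3)      # hypothetical scene with 20k points
batch = sample_scene(scene, 4096)     # (4096, 3) input to the network
```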

xyjbaal commented 2 years ago

I think what affects the accuracy is mainly the number of points per object in each input.

In my experience, FPCC performs well when there are about 30 visible objects and 4,096 input points (~100 points per object).

So even if the total number of points in your scene is only ~4,096, if the number of objects is large (>50), the results will be worse.

Recently there have been many point-wise feature extractors with low computational cost. In my recent work, I improved the feature extractor so that the network can accept ~10,000 points, and it works well with about 50 visible objects in the scene. But this seems to be the limit, because computing the feature distance between point pairs and searching for nearby points with KNN require too much memory.
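Roughly speaking, the memory issue comes from the N x N pairwise distance matrix. A back-of-the-envelope sketch (my own illustration, not the actual FPCC implementation), showing how a dense float32 distance matrix grows with N:

```python
import numpy as np

def pairwise_sq_dist(feat):
    """feat: (N, C) point-wise features -> (N, N) squared feature distances."""
    sq = np.sum(feat ** 2, axis=1)
    # Dense N x N matrix: this single tensor dominates the memory cost.
    return sq[:, None] + sq[None, :] - 2.0 * feat @ feat.T

for n in (4096, 10000, 20000):
    gb = n * n * 4 / 1024 ** 3   # one float32 N x N matrix
    print(f"N = {n:6d}: ~{gb:.2f} GB for a single N x N distance matrix")
```

And on the GPU you also need intermediate tensors and gradients on top of that, so in practice the limit is reached well before the raw matrix size fills the memory.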

PointGroup (CVPR 2020) also performs well, but it has too many parameters and is difficult to compile successfully.

Very happy to discuss this problem with you.