Open anthonyrathe opened 2 years ago
Hi @anthonyrathe,
Thanks for your careful reading and interest in our work.
I agree with you; indeed there are some mistakes in the way we normalize the point clouds. However, I think this is an open question in the point cloud completion community.
After the publication of this paper, we met some problems when dealing with real-scan point clouds. The first challenge is how to normalize real-scan objects. As you have pointed out, we can normalize the partial point clouds based only on themselves rather than on the gt.
But I think normalization, or other information based on some prior knowledge, is necessary. Without any constraint, shape completion is a problem with multiple valid solutions. Normalization is one such constraint: it gives a hint about which part of the object is given. (In the KITTI benchmark, the incomplete point clouds of cars are normalized based on the bbox.) Alternatively, we can give the model some other constraint to reduce the ambiguity, such as the category. But I don't think gt-based normalization is a good idea, as it causes a lot of trouble when used in real-world applications.
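To illustrate the KITTI-style constraint mentioned above, here is a minimal sketch (a hypothetical helper, not the repo's actual code) of normalizing a partial scan with a detector-provided 3D bounding box, so no ground-truth information is needed at test time:

```python
import numpy as np

def normalize_by_bbox(pc, bbox_center, bbox_extent):
    """Center the cloud on the bbox and scale so the box's half-diagonal
    maps to radius 1, i.e. the box fits inside the unit sphere."""
    pc = pc - bbox_center
    return pc / (np.linalg.norm(bbox_extent) / 2.0)

# Toy partial scan of a "car" plus its detected box (assumed values).
rng = np.random.default_rng(0)
partial = rng.uniform(-1.0, 1.0, size=(512, 3)) * [2.0, 0.8, 0.7] + [10.0, 5.0, 0.5]
pc = normalize_by_bbox(partial,
                       bbox_center=np.array([10.0, 5.0, 0.5]),
                       bbox_extent=np.array([4.0, 1.6, 1.4]))
```

Because the box comes from a detector rather than from the complete shape, the same normalization is available at both training and test time.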
Thanks for pointing this out; I am also looking for a better way to solve it.
Thank you very much for your quick reply!
Interesting thought to consider normalization as a possible constraint; I fully agree that point cloud completion has many possible solutions, so the model could be guided by such constraints to reduce the solution space. Also, I noticed you implemented some code to perform random scaling, so I wondered: have you tried randomly scaling the input to mitigate this normalization problem by forcing the model to become scale-independent?
Many thanks again!
Hi, sorry for the late reply. Yes, we tried it when we adapted our model to the real-scan dataset KITTI. However, it can only make the model adapt to various scales within a limited range.
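For reference, the scale augmentation discussed here usually takes a form like the sketch below (an assumed implementation, not the repo's exact code): draw one random scale per sample and apply it to both the partial input and the ground truth so the pair stays geometrically consistent.

```python
import numpy as np

def random_scale(partial, gt, lo=0.8, hi=1.2, rng=None):
    """Jointly scale a (partial, gt) pair by one factor drawn from [lo, hi]."""
    if rng is None:
        rng = np.random.default_rng()
    s = rng.uniform(lo, hi)
    return partial * s, gt * s

# Toy usage with fixed seed for reproducibility.
partial = np.ones((4, 3))
gt = np.full((8, 3), 2.0)
p_s, g_s = random_scale(partial, gt, rng=np.random.default_rng(42))
```

As noted in the reply, this only exposes the model to scales inside the sampled range; scans far outside `[lo, hi]` are still out of distribution.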
Hi @yuxumin,
First of all, thank you very much for this excellent piece of research!
While going through the code, I had a question about the way you normalize the ShapeNet data: it seems you normalize each object to fit within the unit sphere as you load the training and test samples in the `ShapeNet` class. You then remove a part of that normalized point cloud and feed it to the model as input. However, in real use cases one would normalize the occluded input point cloud, not the ground truth. Would this not lead the model to expect ground truths that remain within the confines of the unit sphere, which is impossible to guarantee when only the occluded input point cloud is available? And would this not amount to data leakage and reduced performance whenever the ground truth exceeds the confines of the unit sphere? This will often be the case, since removing a large part of the ground truth is likely to shrink the bounding sphere of what remains. I would love to hear your thoughts on this!
Many thanks in advance, @anthonyrathe