clinplayer / Point2Skeleton

Point2Skeleton: Learning Skeletal Representations from Point Clouds (CVPR2021)
MIT License

Code Or Idea For Structural Decomposition #3

Closed FishWoWater closed 3 years ago

FishWoWater commented 3 years ago

As you mentioned in Sec. 6, More Applications :: Structural Decomposition, we can detect the dimensional changes and segment different parts of a point cloud. Could you share the code for this part, or give some detailed insights? I speculate that a dimensional change means something like line -> triangle, but I don't understand what a non-manifold branch means.

clinplayer commented 3 years ago

Hi, we gave illustrations for these two types of junctions in the latest version. The non-manifold structure is like the shape "Y", where three or more line segments join at a vertex shared by all of those curves.

We simply reuse part of the code from SEG-MAT: https://github.com/clinplayer/SEG-MAT. However, the implementation needed in this paper, which only detects the line-triangle joints and "Y" joints, is trivial. You don't necessarily need to follow the code of SEG-MAT, which is a bit complex.
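The detection described above reduces to simple connectivity checks. Below is a minimal sketch, not the authors' or SEG-MAT's actual code: it assumes the skeletal mesh is given as a list of curve edges and a list of triangle faces (the function name and representation are my own), and flags a line-triangle joint as a curve vertex that also belongs to a triangle sheet, and a "Y" joint as a vertex shared by three or more curve segments.

```python
from collections import defaultdict

def find_junctions(edges, triangles):
    """Detect the two junction types in a skeletal mesh (illustrative sketch).

    edges: list of (i, j) vertex-index pairs (curve segments)
    triangles: list of (i, j, k) vertex-index triples (surface sheets)
    Returns (dimensional-change joints, non-manifold "Y" joints).
    """
    # vertices that belong to at least one triangle sheet
    tri_verts = {v for t in triangles for v in t}

    # how many curve segments touch each vertex
    degree = defaultdict(int)
    for i, j in edges:
        degree[i] += 1
        degree[j] += 1

    # dimensional change: a curve vertex lying on a triangle sheet
    dim_change = [v for v in degree if v in tri_verts]
    # non-manifold "Y": three or more curve segments meeting at one vertex
    y_joints = [v for v, d in degree.items() if d >= 3 and v not in tri_verts]
    return dim_change, y_joints
```

For example, with edges `[(0,1), (1,2), (1,3)]` and triangle `[(3,4,5)]`, vertex 1 is a "Y" joint and vertex 3 is a line-triangle joint.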

FishWoWater commented 3 years ago

OK! Thanks

FishWoWater commented 3 years ago

Sorry to trouble you again. How did you extract the point clouds from the original ShapeNet data? You mention a virtual scanner in the paper; I speculate your practice is something like uniform sampling. I tried to sample 2k points uniformly from the original mesh and compare with yours, and I found that the scale and global orientation of the point clouds are generally different. So I am wondering: is there any pre-processing step in your pipeline? Would it be OK to use simple uniform sampling if I want to train on my custom dataset? Thanks!

clinplayer commented 3 years ago

> Sorry to trouble you again. How did you extract the point clouds from the original ShapeNet data? ... So I am wondering: is there any pre-processing step in your pipeline? Would it be OK to use simple uniform sampling if I want to train on my custom dataset?

Of course, you can directly sample on the meshes of ShapeNet. The reason we use a virtual scanner is that it generates consistent normals. (Note that we don't need normal vectors, but the competing methods rely on them.)
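Direct sampling on a mesh, as suggested above, is usually done with area-weighted triangle selection plus random barycentric coordinates. A minimal NumPy sketch (the function name and signature are my own, not from the Point2Skeleton pipeline):

```python
import numpy as np

def sample_points(vertices, faces, n=2000, seed=0):
    """Sample n points uniformly (by area) on a triangle mesh.

    vertices: (V, 3) float array; faces: (F, 3) int array of vertex indices.
    """
    rng = np.random.default_rng(seed)
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))

    # pick triangles with probability proportional to their area
    areas = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1)
    idx = rng.choice(len(faces), size=n, p=areas / areas.sum())

    # random barycentric coordinates, reflected to stay inside the triangle
    u, v = rng.random(n), rng.random(n)
    mask = u + v > 1
    u[mask], v[mask] = 1 - u[mask], 1 - v[mask]
    return (v0[idx]
            + u[:, None] * (v1[idx] - v0[idx])
            + v[:, None] * (v2[idx] - v0[idx]))
```

Libraries such as trimesh or Open3D provide equivalent (and better tested) sampling routines if you prefer not to roll your own.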

The scale and orientation are not an issue; you just need to ensure that all your training data are normalized to the same distribution. For ours, I remember we normalized the coordinates of each shape to [-1, 1].
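The normalization mentioned above can be done by centering the cloud and dividing by its largest absolute coordinate. A short sketch under that assumption (the exact preprocessing used by the authors may differ):

```python
import numpy as np

def normalize_to_unit_cube(points):
    """Center a point cloud and scale it into [-1, 1]^3 (assumed preprocessing)."""
    # center on the midpoint of the axis-aligned bounding box
    center = (points.max(axis=0) + points.min(axis=0)) / 2.0
    points = points - center
    # scale so the farthest coordinate lands on the cube boundary
    return points / np.abs(points).max()
```

Applying the same function to every shape in a custom dataset should match the "same distribution" requirement described in the reply.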

FishWoWater commented 3 years ago

Thanks! It worked for me. @clinplayer