clinplayer / Point2Skeleton

Point2Skeleton: Learning Skeletal Representations from Point Clouds (CVPR2021)

questions #12

Closed · zouwenqin closed this issue 2 years ago

zouwenqin commented 2 years ago

I really appreciate your work. I have a few questions:

  1. Is there a special correlation between the final skeleton mesh and the point cloud? How can I get the original point cloud back from the skeleton mesh?
  2. How do you get the skeleton meshes from a sequence in the "Skeletonization with consistent correspondence" example under More Applications? Do you predict a skeleton mesh per frame, or in some other way?
clinplayer commented 2 years ago

Hey!

  1. There are two methods. (1) In MeshUtil.py, we have a function named "rand_sample_points_on_skeleton_mesh", which you can use to densely sample a large number of 3D spheres on a skeletal mesh to approximate the input shape surface (see lines 62-63 in test.py). Then you can use a virtual scanner (https://github.com/wang-ps/O-CNN/tree/master/virtual_scanner) to extract the surface points. This method is simple but time-consuming; see the rough sketch after this list. (2) Given a skeletal triangle with spheres centered at its vertices, you can calculate the two common tangent triangles to represent the interpolation. For edge segments it is similar, but you need to calculate a tangent cone, as shown in the figure attached to this comment.

  2. You just need to predict the skeletal mesh for each frame, and the network will learn the correspondence automatically, e.g., the location of the predicted k-th vertex is almost consistent across all the inputs. Note that to use this property, your input sequences should be aligned first.
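
For reference, here is a minimal sketch of the idea behind method (1) above. It is not the actual `rand_sample_points_on_skeleton_mesh` implementation from MeshUtil.py; the function and parameter names (`sample_surface_from_skeleton`, `n_spheres_per_edge`, `n_points_per_sphere`) are illustrative assumptions. It samples points on spheres linearly interpolated along skeletal edges; points that end up buried inside neighbouring spheres are not filtered out, which is why the virtual scanner pass mentioned above is still needed to extract only the outer surface.

```python
import numpy as np

def sample_surface_from_skeleton(vertices, radii, edges,
                                 n_spheres_per_edge=20,
                                 n_points_per_sphere=100):
    """Approximate a shape surface by sampling points on spheres
    interpolated along the edges of a skeletal mesh.

    vertices : (V, 3) array of skeletal vertex positions
    radii    : (V,)   array of per-vertex sphere radii
    edges    : (E, 2) array of vertex-index pairs
    """
    points = []
    for i, j in edges:
        # Linearly interpolate sphere centers and radii along the edge.
        ts = np.linspace(0.0, 1.0, n_spheres_per_edge)
        centers = (1 - ts)[:, None] * vertices[i] + ts[:, None] * vertices[j]
        rs = (1 - ts) * radii[i] + ts * radii[j]
        for c, r in zip(centers, rs):
            # Sample roughly uniform directions on the unit sphere.
            dirs = np.random.normal(size=(n_points_per_sphere, 3))
            dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
            # Points on the surface of the interpolated sphere; some of
            # these lie inside neighbouring spheres and should later be
            # removed, e.g., by the virtual scanner step.
            points.append(c + r * dirs)
    return np.concatenate(points, axis=0)
```

A skeletal triangle could be handled the same way by interpolating the three vertex spheres with barycentric coordinates instead of the linear interpolation along an edge.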