bharat-b7 / IPNet

Repo for "Combining Implicit Function Learning and Parametric Models for 3D Human Reconstruction", ECCV'20 (Oral)

Boundary points of single view data? #34

Closed by heathentw 2 years ago

heathentw commented 2 years ago

As I understand it, the input "points" are sampled from the boundary of a mesh reconstructed from a complete scan of a real-world object. My question is: how do we get the sampled input points when we only have a point cloud from a single view (e.g. one depth camera)? Since we don't have a complete reconstructed mesh, we can't sample the areas that have no depth points, right?

Thank you for your time.

bharat-b7 commented 2 years ago

Boundary sampling is done only at training time, because at inference you query the entire 256^3 grid anyway. For training with a single view, you will still have a complete shape available for supervision, and you can sample points from that.
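The training-time boundary sampling described above can be sketched roughly as follows: sample points on the surface of the complete mesh (area-weighted over triangles) and perturb them with Gaussian noise, typically at a couple of different sigmas. This is a minimal NumPy illustration, not the repo's actual code; `sample_boundary_points` and the toy tetrahedron mesh are hypothetical stand-ins for a registered scan.

```python
import numpy as np

def sample_boundary_points(vertices, faces, n_points, sigma, rng):
    """Sample points near the mesh surface: pick random triangles
    (area-weighted), draw uniform barycentric samples on each, then
    add isotropic Gaussian noise to push samples off the surface."""
    tris = vertices[faces]                                   # (F, 3, 3)
    # triangle areas, used to sample faces proportionally to area
    areas = 0.5 * np.linalg.norm(
        np.cross(tris[:, 1] - tris[:, 0], tris[:, 2] - tris[:, 0]), axis=1)
    idx = rng.choice(len(faces), size=n_points, p=areas / areas.sum())
    # uniform barycentric coordinates via the square-root trick
    r1 = np.sqrt(rng.random(n_points))
    r2 = rng.random(n_points)
    a, b, c = 1.0 - r1, r1 * (1.0 - r2), r1 * r2
    surface = (a[:, None] * tris[idx, 0]
               + b[:, None] * tris[idx, 1]
               + c[:, None] * tris[idx, 2])
    # sigma controls how far samples stray from the boundary
    return surface + rng.normal(scale=sigma, size=(n_points, 3))

# toy "complete mesh": a unit tetrahedron (stand-in for a real scan)
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
faces = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])

rng = np.random.default_rng(0)
near = sample_boundary_points(verts, faces, 1000, sigma=0.015, rng=rng)  # tight samples
far = sample_boundary_points(verts, faces, 1000, sigma=0.2, rng=rng)     # loose samples
```

Mixing a small and a large sigma gives both samples that tightly hug the surface and samples that cover the surrounding volume, which is the usual trade-off in this kind of implicit-function training.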

heathentw commented 2 years ago

Just to make sure: is the training data all synthetic? So partially scanned real data is not involved during training?

bharat-b7 commented 2 years ago

Yes, you'll need the full shape for supervision. This teaches your network to complete the shape.

heathentw commented 2 years ago

I see. Thank you very much.

heathentw commented 2 years ago

@bharat-b7 May I ask: for training IPNetMANO, do you use the MANO parametric hand model to produce synthetic data? Will there be a domain gap between real and synthetic data? If so, how do you handle it? Thanks.