mabaorui / NeuralPull

Implementation of ICML'2021: Neural-Pull: Learning Signed Distance Functions from Point Clouds by Learning to Pull Space onto Surfaces
MIT License

Questions about surface reconstruction #14

Closed QtEngineer closed 2 years ago

QtEngineer commented 2 years ago

Hi, Recently I have read the PyTorch code of your work, but I cannot fully understand how the surface is reconstructed. According to the code, it does not use any information from the GT mesh, and even the query points are randomly generated. How can we make sure that the generated mesh corresponds to the GT mesh one by one? Could you please answer my question? Thank you so much!

mabaorui commented 2 years ago

Hi, Please refer to Equation 1 and Equation 2 in the paper for details of how this works without a GT mesh, and to the code for the query point sampling rules. https://arxiv.org/pdf/2011.13495.pdf https://github.com/mabaorui/NeuralPull/blob/9932b3a18f79db52fd1455f8cd220cf1c7b378a9/sample_query_point.py#L143 https://github.com/bearprin/neuralpull-pytorch/blame/6757bd26d5d006b4168952158ec67a299f8766ba/dataset/train_dataset.py#L65
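For reference, a minimal PyTorch sketch of the training objective described by Eq. 1 and Eq. 2 (pulling a query point onto the surface along the SDF gradient, then penalising its distance to the nearest point of the ground-truth point cloud) could look like the following. The names `sdf_net`, `query_pts`, and `nearest_surface_pts` are placeholders for illustration, not the repository's actual identifiers:

```python
import torch

def pull_loss(sdf_net, query_pts, nearest_surface_pts):
    # query_pts: (N, 3) sampled query locations around the point cloud
    # nearest_surface_pts: (N, 3) nearest ground-truth point to each query
    query_pts = query_pts.requires_grad_(True)
    sdf = sdf_net(query_pts)                                  # f(q), shape (N, 1)

    # Gradient of the predicted SDF w.r.t. the query points.
    grad = torch.autograd.grad(sdf.sum(), query_pts, create_graph=True)[0]
    direction = grad / (grad.norm(dim=-1, keepdim=True) + 1e-8)

    # Eq. 1: pull each query point onto the surface along the gradient direction.
    pulled = query_pts - sdf * direction

    # Eq. 2: the pulled point should coincide with the nearest surface point.
    return ((pulled - nearest_surface_pts) ** 2).sum(dim=-1).mean()
```

Because the target is just the nearest point of the input point cloud, no GT mesh is needed; the query sampling itself follows the rule in the linked `sample_query_point.py`.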

QtEngineer commented 2 years ago

Hi, I read the paper again. The method is very novel and your impressive work inspires me a lot. Unfortunately, since I am a newcomer to 3D reconstruction, I am still confused: I can understand how it works in the training step, but not at inference. The paper says "sample query locations around each point p_j of the ground truth point cloud P", so it needs the point cloud P to sample the query points? I am not sure whether I understand it correctly. At the inference step, the code

```python
class ValDataset(data.Dataset):
    def __init__(self, bd=0.55, resolution=128):
        super(ValDataset, self).__init__()
        shape = (resolution, resolution, resolution)
        vxs = torch.arange(-bd, bd, bd * 2 / resolution)
        vys = torch.arange(-bd, bd, bd * 2 / resolution)
        vzs = torch.arange(-bd, bd, bd * 2 / resolution)
        pxs = vxs.view(-1, 1, 1).expand(*shape).contiguous().view(resolution ** 3)
        pys = vys.view(1, -1, 1).expand(*shape).contiguous().view(resolution ** 3)
        pzs = vzs.view(1, 1, -1).expand(*shape).contiguous().view(resolution ** 3)
        self.p = torch.stack([pxs, pys, pzs], dim=1).reshape(resolution, resolution ** 2, 3)
```

does not seem to follow those sampling rules. I also noticed in Section 4.1: "Details. We employ Neural-Pull to reconstruct 3D surfaces from point clouds. Given a point cloud P, we do not leverage any condition c in Fig. 1 and overfit the neural network to the shape by minimizing the loss in Eq. 2, where we remove the network for extracting the feature of the condition." But ValDataset does not seem to load any mesh, which conflicts with "Given a point cloud P". If you could give me some help, I'll be very grateful.

mabaorui commented 2 years ago

Hi, If you are only confused about the inference phase, please refer to the Marching Cubes algorithm. We learn the implicit function field through the sampled query points mentioned above during training. During testing, we simply apply Marching Cubes to extract the mesh. The ValDataset code discretizes the implicit function field learned during training into the input of Marching Cubes. https://en.wikipedia.org/wiki/Marching_cubes
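A minimal sketch of this inference step, assuming a single shape with no condition code: evaluate the learned SDF on a dense grid matching ValDataset's bounds and run Marching Cubes on it. `sdf_net` is a placeholder for the trained network, and `skimage` is used here only as one convenient Marching Cubes implementation:

```python
import torch
from skimage import measure

def extract_mesh(sdf_net, bd=0.55, resolution=128):
    # Regular grid over [-bd, bd)^3, mirroring ValDataset's discretisation.
    xs = torch.arange(-bd, bd, bd * 2 / resolution)
    grid = torch.stack(torch.meshgrid(xs, xs, xs, indexing='ij'), dim=-1)

    with torch.no_grad():
        # In practice the 128^3 queries would be evaluated in chunks.
        sdf = sdf_net(grid.reshape(-1, 3)).reshape(resolution, resolution, resolution)

    # Marching Cubes extracts the zero level set of the discretised field.
    verts, faces, normals, _ = measure.marching_cubes(sdf.numpy(), level=0.0)

    # Map voxel indices back to world coordinates.
    verts = verts * (bd * 2 / resolution) - bd
    return verts, faces
```

So the point cloud P is only needed at training time to build the query/nearest-point pairs; at test time the grid alone is enough to sample the learned field.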

QtEngineer commented 2 years ago

Hi, I saw an issue in the PyTorch code repo, where the author suggested: "suggest fitting one shape by one network though the implementation supports fitting several shapes by condition code (feat). This will reduce the difficulty of convergence. Just put only one .npy in the 'npy' folder." Is this because "we do not leverage any condition c in Fig. 1 and overfit the neural network to the shape by minimizing the loss in Eq. 2"? I haven't learned TensorFlow, so I am not sure whether there is a difference from the TensorFlow code. I'm looking forward to your earliest reply. Thanks a lot for the time you take!

mabaorui commented 2 years ago

Hi, It should not be intrinsically linked to the condition c, but learning multiple models simultaneously makes the network harder to train and take longer to converge. We may release the official PyTorch implementation of NeuralPull later, but we've been busy lately. I think the quality of bearprin's implementation is high, even though it hasn't been fully validated.

QtEngineer commented 2 years ago

Hi, Thanks a lot for your answer. So our code can learn multiple shapes, right? I also think the quality of the PyTorch code is very high: it implements NeuralPull with only a small number of code lines and is easy to read. I have also read some other methods, like DeepSDF. For example, DeepSDF splits the dataset into train and test, and needs both a point (x, y, z) and a latent vector (latent code) to get sdf(x). What confuses me is: how can our repo distinguish different shapes while evaluating, without any extra input, after training the network just once on shapes such as mesh1_horse, mesh2_rabbit, ...?

mabaorui commented 2 years ago

Hi, Yes, NeuralPull can learn multiple shapes. We use a one-hot vector to represent which shape to reconstruct. https://github.com/bearprin/neuralpull-pytorch/blame/6757bd26d5d006b4168952158ec67a299f8766ba/train.py#L106
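A small illustrative snippet of how such a one-hot condition code can select one of several shapes fitted by the same network. The tensor shapes and the concatenation with the query points below are assumptions for illustration, not the exact layout of the linked train.py:

```python
import torch

num_shapes = 3                                   # assumed number of shapes in the npy folder
shape_ind = 1                                    # index of the shape to reconstruct

feat = torch.zeros(1, 1, num_shapes)
feat[0, :, shape_ind] = 1                        # one-hot code picks the shape (cf. train.py line 107)

# The condition code is attached to every query point, so the same network
# outputs the SDF of whichever shape the one-hot vector selects.
query = torch.rand(1, 1024, 3)
net_input = torch.cat([query, feat.expand(1, 1024, num_shapes)], dim=-1)
```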

QtEngineer commented 2 years ago

Hi, Your answer is really helpful to me!! Now I understand how NeuralPull works. From what I understand, the next code line 107, feat[0, :, shape_ind] = 1, makes the one-hot vector work, and in fact the number of query points in ValDataset is 128^3, which equals the Marching Cubes resolution, so the grid covers the full point cloud P. Have I understood correctly?

mabaorui commented 2 years ago

Hi, You are right. If you find our code or paper useful, please consider giving the repository a star. Thanks.

QtEngineer commented 2 years ago

Hi, I really appreciate your help. I have already starred this repo.