In my project, I can only obtain points on the surface of the object. I have already completed training with ShapeNet data.
During inference, I made the following changes: I first pre-processed ShapeNet's test set (surface samples) into .ply files. Because these points have an SDF value of 0, I simply wrote their coordinates, together with the 0 SDF values, into .npz files. Then I used the newly generated .npz files to optimize the latent code at inference time. However, the reconstructed mesh did not look similar to the ground truth.
Is there anything unreasonable about what I did?
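For concreteness, here is a minimal sketch of the preprocessing step I described. It assumes DeepSDF's SdfSamples .npz layout, i.e. `pos` and `neg` arrays of (x, y, z, sdf) rows, and uses trimesh purely as an illustrative .ply reader; the file names are placeholders.

```python
# A minimal sketch of the preprocessing described above (assumptions: trimesh
# as the .ply reader, and DeepSDF's SdfSamples layout with "pos"/"neg" arrays
# of (x, y, z, sdf) rows; file names are placeholders).
import numpy as np
import trimesh

def surface_ply_to_npz(ply_path, npz_path):
    # Surface samples only, so every SDF value is written as 0.
    points = np.asarray(trimesh.load(ply_path).vertices, dtype=np.float32)
    sdf = np.zeros((points.shape[0], 1), dtype=np.float32)
    samples = np.hstack([points, sdf])
    # With all SDF values at 0 the pos/neg split is arbitrary; half goes in
    # each array so a loader that draws from both arrays still works.
    half = samples.shape[0] // 2
    np.savez(npz_path, pos=samples[:half], neg=samples[half:])

surface_ply_to_npz("test_shape.ply", "test_shape.npz")
```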
This is a very clever idea.
However, I think this loses an essential part of what the function learned. Inferring the latent vector from zero-valued SDF samples alone removes all supervision away from the surface, so the optimization cannot recover how the SDF values should grow outside (and shrink inside) the shape.
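To make that concrete, here is a minimal sketch of one common workaround (my own suggestion, not something from this repo's pipeline): if per-point surface normals are available, stepping a small distance eps along each normal yields off-surface samples whose signed distance is approximately ±eps, restoring the non-zero supervision that latent-code optimization needs.

```python
# A minimal sketch (assumptions: points and normals are given as (N, 3) numpy
# arrays in the network's normalized coordinates, and eps is hand-picked).
import numpy as np

def make_offsurface_samples(points, normals, eps=0.005):
    normals = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    n = points.shape[0]
    # Stepping outward along the normal gives SDF ~ +eps, inward gives ~ -eps
    # (DeepSDF convention: positive outside, negative inside).
    outside = np.hstack([points + eps * normals, np.full((n, 1), +eps)])
    inside = np.hstack([points - eps * normals, np.full((n, 1), -eps)])
    return outside.astype(np.float32), inside.astype(np.float32)

if __name__ == "__main__":
    # Demo on a unit sphere, where each point's normal is the point itself.
    pts = np.random.randn(2048, 3).astype(np.float32)
    pts /= np.linalg.norm(pts, axis=1, keepdims=True)
    pos, neg = make_offsurface_samples(pts, pts.copy())
    np.savez("sphere_offsurface.npz", pos=pos, neg=neg)  # DeepSDF-style keys
```

These off-surface rows can then be mixed with your zero-valued surface samples before running the latent-code optimization.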