facebookresearch / DeepSDF

Learning Continuous Signed Distance Functions for Shape Representation
MIT License

Questions regarding the "Shape Completion" experiments #16

Closed JiamingSuen closed 5 years ago

JiamingSuen commented 5 years ago

Hello @jjparkcv and @tschmidt23, thanks for sharing this great work. I've finished training the model on the "chairs" class and have a few questions about the shape completion experiments in the paper:

  1. Are the models in the shape completion experiments trained separately, using only partial (single-view) point cloud input? Or can I just reuse the "complete sampling" version of the training data (as produced by the preprocessing code published in this repo)?
  2. Do you also use sdf_gt during inference for shape completion (even for noisy depth input)? Is it possible to use zeros as sdf_gt for a point cloud sampled only from the object surface?
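For context, the "zeros as sdf_gt" idea can be sketched as latent-code optimization against a frozen decoder. This is a toy stand-in, not the repo's inference code: the decoder below is a hypothetical frozen linear map instead of DeepSDF's trained MLP, and plain squared error replaces the paper's clamped L1 loss. It only illustrates the shape of the objective: a data term pulling predicted SDF values to zero at the surface samples, plus a latent-norm prior.

```python
import numpy as np

rng = np.random.default_rng(0)
LAM = 1e-2  # weight of the latent-norm prior (illustrative value)

# Hypothetical frozen "decoder": sdf(z, x) = x @ w + z @ v.
# A linear stand-in for the trained DeepSDF MLP, chosen so the gradient is analytic.
w = rng.normal(size=3)        # weights over the 3D query coordinate
v = rng.normal(size=8)        # weights over an 8-D latent code
X = rng.normal(size=(64, 3))  # stand-in surface samples (canonical space)

def decode(z, X):
    """Predicted SDF at each query point under latent code z."""
    return X @ w + z @ v

def total_loss(z, X):
    """Data term (sdf_gt = 0 at surface samples) plus latent prior."""
    return np.mean(decode(z, X) ** 2) + LAM * z @ z

def optimize_latent(X, z_init, steps=200, lr=0.02):
    """Gradient descent on the latent code only; the 'decoder' stays frozen."""
    z = z_init.copy()
    for _ in range(steps):
        pred = decode(z, X)
        # Analytic gradient of the toy objective with respect to z.
        grad = (2.0 / len(X)) * pred.sum() * v + 2.0 * LAM * z
        z = z - lr * grad
    return z

z_init = 0.5 * np.ones(8)
z_opt = optimize_latent(X, z_init)
```

In the real pipeline the gradient would come from autograd through the trained network; only the structure of the objective is the same.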

For the second question I experimented a bit, and the result is not quite as expected. This is the input point cloud: image And this is the reconstructed mesh: image image

If this is possible, any ideas on what I did wrong?

Thanks a lot!

tschmidt23 commented 5 years ago

It looks like there is a scale factor mismatch between your point samples and your reconstruction. For good results, make sure you're always using SDF samples in the canonical space.
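On the canonical-space point, a minimal normalization sketch might look as follows, assuming the unit-sphere convention (bounding-box center, padded radius). The padding factor `BUFFER` is a guess here; check the repo's preprocessing for the actual value. The key point is that the same center and scale must be applied to every query point, and inverted on the reconstructed mesh vertices.

```python
import numpy as np

BUFFER = 1.03  # hypothetical padding factor; see the repo's preprocessing for the real one

def to_canonical(points, buffer=BUFFER):
    """Center points on their bounding-box center and scale them into the unit sphere."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    center = (lo + hi) / 2.0
    radius = np.linalg.norm(points - center, axis=1).max()
    scale = 1.0 / (radius * buffer)
    return (points - center) * scale, center, scale

def from_canonical(points, center, scale):
    """Invert the normalization, e.g. for reconstructed mesh vertices."""
    return points / scale + center

pts = np.random.default_rng(1).uniform(-5.0, 7.0, size=(100, 3))
canon, center, scale = to_canonical(pts)  # canon now fits inside the unit sphere
```

A scale mismatch like the one described above typically comes from normalizing the training shapes but feeding raw, unnormalized points at inference time.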

JiamingSuen commented 5 years ago

Thanks for your reply. I calibrated the scale factor and ran the experiment again. This time the result is plausible, but the reconstruction is still much worse than the results presented in the paper, as shown in this example (the same point cloud input is used): image I'd like to know more details about how the shape completion experiment was conducted:

  1. Do I have to prepare special training data and retrain the model for single-view shape completion?
  2. Is a single-view point cloud (as shown in Figure 8a of the paper) the only input data for the shape completion experiment? If I understand correctly, sdf_gt = 0 should be used during latent-code optimization at inference, with no additional randomly sampled points in the cube as input.

JiamingSuen commented 5 years ago

@tschmidt23 @jjparkcv It would be nice if you could respond at your earliest convenience, thanks!

JiamingSuen commented 5 years ago

After using input data generated from the C++ preprocessing code, the shape completion result is better. It seems the network is more sensitive to the input data than I expected. image image I'm now working on the single-view depth-input completion experiment, so I'm closing this issue for now. However, it would be really nice if you could give confirmed answers to my questions.

HM102 commented 5 years ago

Hey @JiamingSuen ,

Did you use the same inference code, with the input being the partial model? Also, did you generate ground-truth SDF for the partial shape and then use it during inference for optimization?

JiamingSuen commented 5 years ago

> Hey @JiamingSuen ,
>
> Did you use the same inference code, with the input being the partial model? also did you generate ground truth SDF for the partial shape and then use it during the inference for optimization?

Yes, I did. I only tried using point clouds sampled from the surface (either complete or partial) as input data, so all ground-truth SDF values were zero. In the partial point cloud experiment, the result is worse than with complete input, but still reasonable considering the network never saw these partial inputs during training.

moshanATucsd commented 5 years ago

Hi @JiamingSuen, could you kindly advise how we should "calibrate the scale factor" for point clouds?

yjcaimeow commented 3 years ago

@JiamingSuen

Hi Jiaming,

> After using input data generated from the cpp preprocessing code, the shape completion result is better. It seems that the network is more sensitive to input data than I expected.

So for partial point cloud completion, did you try to generate some points whose SDF values are non-zero? And could you advise how you generated them?
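For what it's worth, one common way to obtain non-zero sdf_gt values from an oriented point cloud is to offset each surface point along its normal: with outward-pointing normals, p + εn lies approximately at SDF +ε and p − εn at −ε. This mirrors the near-surface sampling idea from the paper's preprocessing, but the function below is an illustrative sketch, not the repo's code.

```python
import numpy as np

def near_surface_samples(points, normals, eps=0.01):
    """Return (queries, sdf_gt) by offsetting each point +/- eps along its normal.

    Assumes unit-length, outward-pointing normals; the SDF labels are only
    approximate (exact in the limit eps -> 0 for a smooth surface).
    """
    outside = points + eps * normals   # approx. SDF +eps
    inside = points - eps * normals    # approx. SDF -eps
    queries = np.concatenate([outside, inside], axis=0)
    sdf = np.concatenate([np.full(len(points), eps),
                          np.full(len(points), -eps)])
    return queries, sdf

# Sanity check on a unit sphere, where the true SDF is ||x|| - 1 and the
# surface points coincide with their outward normals.
rng = np.random.default_rng(2)
dirs = rng.normal(size=(200, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
queries, sdf_gt = near_surface_samples(dirs, dirs, eps=0.05)
true_sdf = np.linalg.norm(queries, axis=1) - 1.0
```

For real scans the normals would have to come from the sensor viewpoint or a normal-estimation step, and the approximation degrades near thin structures.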

Best, Yingjie