autonomousvision / sdfstudio

A Unified Framework for Surface Reconstruction
Apache License 2.0

Unable to reproduce dtuscan65 depth values #102

Open iahmedmaher opened 1 year ago

iahmedmaher commented 1 year ago

Describe the bug
I am trying to regenerate the depth npys found inside data/sdfstudio-demo-data/dtu-scan65. I used the command below with the latest omnidata repo and weights. The produced npys have values very close to the originals, with differences on the order of ~0.001 (a comparison sketch follows the command). However, when I train monosdf using these depth maps, the depth loss does not converge and the resulting mesh is just a flat surface without any artifact resembling a skull.

python extract_monocular_cues.py \
  --omnidata_path {omnidata_path} \
  --pretrained_model {pretrained_models} \
  --img_path {output_dir} --output_path {output_dir} \
  --task depth
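
For reference, a quick way to quantify the ~0.001 difference is a script along these lines (a sketch; the directory layout and the *_depth.npy naming are assumptions on my part):

```python
import numpy as np
from pathlib import Path

# Hypothetical locations: shipped demo depths vs. regenerated ones.
orig_dir = Path("data/sdfstudio-demo-data/dtu-scan65")
new_dir = Path("regenerated")  # output of extract_monocular_cues.py

for orig_path in sorted(orig_dir.glob("*_depth.npy")):
    orig = np.load(orig_path)
    new = np.load(new_dir / orig_path.name)
    # Report the largest per-pixel deviation for each depth map.
    print(orig_path.name, np.abs(orig - new).max())
```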

To Reproduce
Run the command above to produce the depth npys inside data/sdfstudio-demo-data/dtu-scan65.

Expected behavior
I should get the same results as the original npys.

niujinshuchong commented 1 year ago

@iAhmedMaher I think a difference of ~0.001 is acceptable. For why the depth loss is not working well, please refer to https://github.com/autonomousvision/sdfstudio/issues/68#issuecomment-1484270533

iahmedmaher commented 1 year ago

@niujinshuchong Thanks for your reply. What I do not understand is why the default data works well but my depth data generated with omnidata does not. Do you have any thoughts on what could be happening?

niujinshuchong commented 1 year ago

@iAhmedMaher Do you mean you trained on the default data with the mono depth loss and it worked well?

iahmedmaher commented 1 year ago

Yes, exactly. With the default data, a skull is clearly visible in the produced mesh after 2000 iterations. With my depth maps, the mesh is just a flat plane.

niujinshuchong commented 1 year ago

That's very strange. Could you try to replace the provided depth *.npy files with your newly generated depth files and try again? Or is that what you did?

iahmedmaher commented 1 year ago

That's what I did. And surprisingly, if I add Gaussian noise with mean 0 and std 0.0005 to the default depth npys (the ones that work well), I also get a mesh that looks like a flat plane.
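
Roughly like this (a sketch; the file pattern and the in-place overwrite are just how I'd illustrate it, not the exact script):

```python
import glob
import numpy as np

rng = np.random.default_rng(seed=0)
# Perturb the default (working) depth maps in place;
# the *_depth.npy naming pattern is illustrative.
for path in glob.glob("data/sdfstudio-demo-data/dtu-scan65/*_depth.npy"):
    depth = np.load(path)
    noisy = depth + rng.normal(loc=0.0, scale=0.0005, size=depth.shape)
    np.save(path, noisy.astype(depth.dtype))
```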

niujinshuchong commented 1 year ago

Could you share your training command?

iahmedmaher commented 1 year ago

Sure, here it is

ns-train monosdf \
  --trainer.max-num-iterations 2000 \
  --trainer.steps-per-save 100 \
  --pipeline.model.sdf-field.inside-outside False \
  --vis viewer \
  --experiment-name exp_name \
  sdfstudio-data \
  --data <path to modified data folder> \
  --include_mono_prior True

niujinshuchong commented 1 year ago

You need to add --pipeline.datamanager.train-num-images-to-sample-from 1 --pipeline.datamanager.train-num-times-to-repeat-images 0 to the command when using the monocular depth loss, because we need to compute the scale and shift between the rendered depth map and the monocular depth map, and therefore all rays in a batch should come from the same image; a sketch of that alignment is below. (It is still surprising that on some datasets, e.g. Replica, it also works even when we sample training rays across all images.)
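
The per-image scale and shift can be solved in closed form by least squares. A minimal sketch of that idea (illustrative, not necessarily the exact sdfstudio code; the function name and signature are mine):

```python
import torch

def compute_scale_and_shift(mono: torch.Tensor,
                            rendered: torch.Tensor,
                            mask: torch.Tensor):
    """Solve min over (w, q) of || mask * ((w * mono + q) - rendered) ||^2.

    The fit is per image: mixing pixels from different images would force one
    (w, q) onto depth maps with unrelated scales, which is why all rays in a
    batch must come from the same image.
    """
    # Normal equations of the 2-parameter least-squares problem.
    a00 = torch.sum(mask * mono * mono)
    a01 = torch.sum(mask * mono)
    a11 = torch.sum(mask)
    b0 = torch.sum(mask * mono * rendered)
    b1 = torch.sum(mask * rendered)
    det = a00 * a11 - a01 * a01  # singular if mono is constant on the mask
    w = (a11 * b0 - a01 * b1) / det
    q = (a00 * b1 - a01 * b0) / det
    return w, q
```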

iahmedmaher commented 1 year ago

Thanks, I did not know that. I will try this command and report back whether it solves the issue.