autonomousvision / monosdf

[NeurIPS'22] MonoSDF: Exploring Monocular Geometric Cues for Neural Implicit Surface Reconstruction
MIT License

about preprocessing #56

Open UestcJay opened 1 year ago

UestcJay commented 1 year ago

Hi,

Thanks for the great work! Since 384 is the input size of the Omnidata model and the DTU images are 1200x1600, if I want monocular cues at the original size, can I first resize 1200x1600 -> 1152x1536, run the model to get the monocular cues, and then upsample them back to 1200x1600? Looking forward to your reply!
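For concreteness, the pipeline I have in mind looks roughly like this. `model` is a hypothetical stand-in for the Omnidata predictor, and nearest-neighbor resizing is only a placeholder for whatever interpolation you prefer:

```python
import numpy as np

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbor resize of an (H, W, ...) array."""
    h, w = img.shape[0], img.shape[1]
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]

def cues_at_full_resolution(image, model):
    """image: (1200, 1600, 3) RGB array; model maps an image to a cue map."""
    small = resize_nearest(image, 1152, 1536)   # sides are multiples of 384
    cue = model(small)                          # e.g. a (1152, 1536) depth map
    return resize_nearest(cue, 1200, 1600)      # upsample back to DTU size
```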

niujinshuchong commented 1 year ago

Hi, for the 1200x1600 DTU images we simply resize the monocular outputs to 1200x1200 and then pad them. You can check it here: https://github.com/autonomousvision/monosdf/blob/main/preprocess/paded_dtu.py. Another way to get high-resolution monocular priors is described here: https://github.com/autonomousvision/monosdf#high-resolution-cues.
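As a rough illustration of that padding step, assuming the square cue map has already been resized from 384x384 to 1200x1200 (the exact strategy lives in paded_dtu.py; the symmetric edge padding below is only a sketch):

```python
import numpy as np

def pad_cue_to_dtu(cue_square):
    """cue_square: (1200, 1200) monocular cue resized from the 384x384 output.
    Pad symmetrically along the width to match the 1200x1600 DTU frames."""
    pad = (1600 - 1200) // 2                    # 200 pixels on each side
    return np.pad(cue_square, ((0, 0), (pad, pad)), mode="edge")
```

Note that padding changes where the cue map's content sits relative to the original image, which is why paded_dtu.py also adjusts the camera parameters accordingly.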

UestcJay commented 1 year ago

I still have a question: is my method more convenient than the approach in paded_dtu.py, since there is no need to modify the camera parameters?

niujinshuchong commented 1 year ago

You could just try it out.

UestcJay commented 1 year ago

How many experiments are averaged for the CD value on the DTU dataset reported in the paper?

niujinshuchong commented 1 year ago

It's averaged over 15 scenes.

Wuuu3511 commented 1 year ago

> Hi,
>
> Thanks for the great work! Since 384 is the input size of the Omnidata model and the DTU images are 1200x1600, if I want monocular cues at the original size, can I first resize 1200x1600 -> 1152x1536, run the model to get the monocular cues, and then upsample them back to 1200x1600? Looking forward to your reply!

Hello! I'd like to ask a question: the Omnidata model is trained with img_size 384. Can it accept input at an arbitrary resolution such as 1152x1536? Thank you!

UestcJay commented 1 year ago

Yes, as long as the height and width are multiples of 384.
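Under that constraint, the 1200x1600 -> 1152x1536 choice above falls out of snapping each side down to the nearest multiple of 384 (a small sketch; the exact divisibility the model requires depends on its backbone):

```python
def snap_to_multiple(h, w, m=384):
    """Round a resolution down to the nearest multiple of m on each side."""
    return (h // m) * m, (w // m) * m

# The DTU resolution snaps to the working size used in this thread.
print(snap_to_multiple(1200, 1600))  # (1152, 1536)
```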

Wuuu3511 commented 1 year ago

> Yes, as long as the height and width are multiples of 384.

Thank you very much for your reply! I tried using 512x640 images as input, and the Omnidata model still returned a 512x640 depth map. That size is not a multiple of 384. Does this result in a larger depth error?

liuxiaozhu01 commented 1 year ago

Hello! I've got a question here: does the resolution of the RGB images and of the depth and normal cues affect the reconstruction result? If so, why? Thank you for your reply! My experimental results really confused me.

niujinshuchong commented 1 year ago

Hi, Omnidata is not trained on high-resolution images, so it's not clear whether it generalises in this case, and the reconstruction results might vary from scene to scene.