autonomousvision / monosdf

[NeurIPS'22] MonoSDF: Exploring Monocular Geometric Cues for Neural Implicit Surface Reconstruction

Does MonoSDF handle non-NDC coordinates? #52

Closed: mwang625 closed this issue 1 year ago

mwang625 commented 1 year ago

Hi,

Thanks for the great work and your answers before.

I am wondering if I can train MonoSDF on the Blender dataset from the original NeRF paper. In nice_slam_apartment_to_monosdf.py the scene is normalized into a unit cube, which means NDC coordinates if I'm not mistaken. Does this work for the Blender dataset, whose views are taken from the upper hemisphere? As mentioned in Mip-NeRF:

"NDC coordinates can only be used for these "forward-facing" scenes; in scenes where the camera rotates significantly (which is the case for the vast majority of 3D datasets) NeRF uses conventional 3D "world coordinates""

I created the cameras.npz file as suggested for the Blender dataset and transformed the poses into the coordinate system used in MonoSDF, but I still have issues getting correct results because of the scale_mat. Would you suggest a different way to process the Blender dataset so that it works with MonoSDF?
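
For context, this is roughly how I build the scale_mat at the moment (simplified; the bounding-box-over-camera-centers heuristic and the target radius are my own guesses, not taken from the repo's scripts):

```python
# Simplified version of my attempt: build scale_mat_i in the IDR-style layout,
# i.e. a matrix that maps the normalized unit-cube coordinates back to world
# coordinates. The bounding box over camera centers and the radius are guesses.
import numpy as np

def build_scale_mat(cam_centers, radius=1.0):
    """cam_centers: (N, 3) camera positions in world coordinates."""
    center = (cam_centers.max(axis=0) + cam_centers.min(axis=0)) / 2.0
    extent = (cam_centers.max(axis=0) - cam_centers.min(axis=0)).max()
    scale = extent / (2.0 * radius)          # world units per normalized unit
    scale_mat = np.eye(4, dtype=np.float32)
    scale_mat[:3, :3] *= scale               # normalized -> world scaling
    scale_mat[:3, 3] = center                # normalized origin -> scene center
    return scale_mat
```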

Thanks in advance!

niujinshuchong commented 1 year ago

Hi, MonoSDF works in Euclidean space, so yes, it handles non-NDC coordinates. I think for the Blender scenes the coordinate convention is x -> right, y -> up, z -> backward, while we use x -> right, y -> down, z -> forward. What you need to do is something like poses[:, :3, 1:3] *= -1.
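
Something like the following untested sketch (assuming the standard transforms_train.json layout of the NeRF synthetic scenes):

```python
# Untested sketch: load Blender/NeRF-synthetic camera-to-world poses and flip
# them from the OpenGL convention (x right, y up, z backward) to the
# OpenCV-style convention we use (x right, y down, z forward).
import json
import numpy as np

with open("transforms_train.json") as f:
    meta = json.load(f)

# (N, 4, 4) camera-to-world matrices
poses = np.stack([np.array(frame["transform_matrix"], dtype=np.float32)
                  for frame in meta["frames"]])

# Negate the y and z columns of the rotation part (the translation column
# stays unchanged).
poses[:, :3, 1:3] *= -1
```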

mwang625 commented 1 year ago

Thanks for your suggestion!