Hi, MonoSDF works in Euclidean space, so yes, it handles non-NDC coordinates. For the Blender scenes the coordinate convention is x -> right, y -> up, z -> backward, while we use x -> right, y -> down, z -> forward. What you need to do is something like poses[:, :3, 1:3] *= -1.
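In case it helps, here is a minimal sketch of that flip for the NeRF Blender format. It is not code from this repo: the transforms_train.json filename and the (N, 4, 4) stacking are assumptions about how the poses are loaded.

```python
import json
import numpy as np

# Load camera-to-world matrices from a Blender split file (filename is an assumption).
with open("transforms_train.json") as f:
    meta = json.load(f)
poses = np.stack([np.asarray(frame["transform_matrix"], dtype=np.float32)
                  for frame in meta["frames"]])  # (N, 4, 4)

# Blender/NeRF convention: x right, y up, z backward.
# MonoSDF convention:      x right, y down, z forward.
# Negating columns 1 and 2 of the rotation flips the y and z camera axes;
# the camera position (column 3) stays unchanged.
poses[:, :3, 1:3] *= -1
```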
Thanks for your suggestion!
Hi,
Thanks for the great work and your previous answers.
I am wondering if I can train MonoSDF on the Blender dataset from the original NeRF paper. In
nice_slam_apartment_to_monosdf.py
the scene is normalized into a unit cube, which amounts to an NDC-like coordinate system if I'm not mistaken. Does this work for the Blender dataset, where the views come from the upper hemisphere (as described in Mip-NeRF)? I created the cameras.npz file as suggested for the Blender dataset and transformed the poses to MonoSDF's coordinate system, but I still can't get correct results because of the scale_mat. Would you suggest a different way to process the Blender dataset so that it works with MonoSDF?
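Roughly, the conversion I have in mind looks like the sketch below. It is only a simplified sketch, assuming MonoSDF expects the IDR/VolSDF-style cameras.npz (world_mat_i = K @ w2c padded to 4x4, scale_mat_i mapping the normalized unit sphere back to world); K, poses, and scene_radius are placeholders for the Blender intrinsics, the flipped poses from above, and a guess at the object extent.

```python
import numpy as np

# Assumptions: `K` is the 3x3 Blender intrinsics and `poses` are the (N, 4, 4)
# camera-to-world matrices already flipped to the MonoSDF convention.
# `scene_radius` guesses that the synthetic object sits near the origin
# inside a sphere of this radius.
scene_radius = 1.5

# scale_mat maps the normalized unit sphere back to world coordinates.
scale_mat = np.eye(4, dtype=np.float32)
scale_mat[:3, :3] *= scene_radius

cam_dict = {}
for i, c2w in enumerate(poses):
    w2c = np.linalg.inv(c2w)
    world_mat = np.eye(4, dtype=np.float32)
    world_mat[:3, :4] = K @ w2c[:3, :4]  # 3x4 projection matrix, padded to 4x4
    cam_dict[f"world_mat_{i}"] = world_mat
    cam_dict[f"scale_mat_{i}"] = scale_mat

np.savez("cameras.npz", **cam_dict)
```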
Thanks in advance!