We can't help you debug this. In your inputs there is no camera rotation, just elevation and azimuth, so the handle and the spout should be at the same level if the teapot's up axis is the right up for `look_at_view_transform`. But the Open3D output doesn't show this, so the inputs to Open3D aren't doing what you think they are.
Use the inverse of R. It seems that PyTorch3D and Open3D have different conventions for the rotation (camera-to-world vs. world-to-camera, or the other way around).
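A minimal sketch of that fix, assuming `R` and `T` are the tensors returned by PyTorch3D's `look_at_view_transform` (the dist/elev/azim values below are placeholders, not necessarily the ones used in the question):

```python
import numpy as np
from pytorch3d.renderer import look_at_view_transform

# Example viewpoint (placeholder values).
R, T = look_at_view_transform(dist=2.7, elev=10.0, azim=150.0)

# PyTorch3D applies R to row vectors (X_cam = X_world @ R + T), so the
# column-vector world-to-camera rotation Open3D expects is R's transpose,
# which for a rotation matrix equals its inverse.
pose = np.eye(4)
pose[:3, :3] = R[0].numpy().T
pose[:3, 3] = T[0].numpy()

# Axis flip: PyTorch3D uses +X left / +Y up, Open3D (OpenCV-style) uses
# +X right / +Y down; +Z points into the scene in both.
pose[:2] = pose[:2] * -1
```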
I am rendering depth maps with PyTorch3D and, given the same camera parameters and pose, they do not match the ones I get with Open3D. Using the teapot from the camera position optimization tutorial, I render a depth map as suggested in #35.
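A minimal sketch of that setup, following the `zbuf` trick from #35 (the mesh path, image size, and dist/elev/azim values are placeholder assumptions, not necessarily the exact ones used here):

```python
import torch
from pytorch3d.io import load_objs_as_meshes
from pytorch3d.renderer import (
    FoVPerspectiveCameras,
    MeshRasterizer,
    RasterizationSettings,
    look_at_view_transform,
)

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Teapot from the camera position optimization tutorial (path is a placeholder).
mesh = load_objs_as_meshes(["./data/teapot.obj"], device=device)

# Viewpoint given only by distance, elevation and azimuth.
R, T = look_at_view_transform(dist=2.7, elev=10.0, azim=150.0, device=device)
cameras = FoVPerspectiveCameras(R=R, T=T, device=device)

raster_settings = RasterizationSettings(image_size=256, blur_radius=0.0, faces_per_pixel=1)
rasterizer = MeshRasterizer(cameras=cameras, raster_settings=raster_settings)

# As suggested in #35, the rasterizer's zbuf holds the per-pixel depth of
# the nearest face (background pixels are -1).
fragments = rasterizer(mesh)
depth = fragments.zbuf[0, ..., 0].cpu().numpy()
```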
With that, I get this image:
However, I also render the same scene with Open3D using ray casting, with the same R and T (but with the axis orientations changed to match PyTorch3D's convention).
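A sketch of that Open3D side, using the tensor-API `RaycastingScene` with the same viewpoint as the PyTorch3D snippet above (the intrinsics assume a 60° field of view, the `FoVPerspectiveCameras` default; whether `R` must be transposed in the conversion is exactly what the reply about the inverse of R addresses):

```python
import numpy as np
import open3d as o3d
from pytorch3d.renderer import look_at_view_transform

# Same teapot mesh (path is a placeholder).
legacy_mesh = o3d.io.read_triangle_mesh("./data/teapot.obj")
scene = o3d.t.geometry.RaycastingScene()
scene.add_triangles(o3d.t.geometry.TriangleMesh.from_legacy(legacy_mesh))

width = height = 256
fov_deg = 60.0  # FoVPerspectiveCameras default
focal = 0.5 * height / np.tan(0.5 * np.radians(fov_deg))
intrinsic = np.array([[focal, 0.0, width / 2.0],
                      [0.0, focal, height / 2.0],
                      [0.0, 0.0, 1.0]])

# Same placeholder viewpoint as on the PyTorch3D side.
R, T = look_at_view_transform(dist=2.7, elev=10.0, azim=150.0)

# World-to-camera extrinsic converted from PyTorch3D's R, T; the transpose
# and the row flip are the two convention changes discussed in this issue.
pose = np.eye(4)
pose[:3, :3] = R[0].numpy().T
pose[:3, 3] = T[0].numpy()
pose[:2] = pose[:2] * -1

rays = scene.create_rays_pinhole(
    intrinsic_matrix=o3d.core.Tensor(intrinsic),
    extrinsic_matrix=o3d.core.Tensor(pose),
    width_px=width,
    height_px=height,
)
ans = scene.cast_rays(rays)
depth = ans["t_hit"].numpy()  # hit distance per ray; inf where the ray misses the mesh
```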
I get this:
It can be seen that the silhouettes of the two renders look different (regardless of the actual depth values), which suggests to me that there is a difference in how Open3D and PyTorch3D interpret R and T, or an error in my conversion from one to the other.
What should I do to get the same depth map from both approaches?
By the way, if I leave the pose untouched for Open3D, i.e. without doing `pose[:2] = pose[:2] * -1`, I get this: