The documentation of cameras_from_opencv_projection at https://github.com/facebookresearch/pytorch3d/blob/main/pytorch3d/utils/camera_conversions.py#L57 says that the fourth argument, image_size, should have height before width. If I change the call to that function in your example to
```python
camera = cameras_from_opencv_projection(
    torch.FloatTensor(R).unsqueeze(0),
    torch.FloatTensor(t).unsqueeze(0),
    torch.FloatTensor(K).unsqueeze(0),
    torch.Tensor([h, w]).unsqueeze(0),  # image_size expects (height, width)
)
```
then the output looks okay. I don't think there's anything wrong here.
🐛 Bugs / Unexpected behaviors
The image produced by MeshRenderer does not align with the vertices projected by PerspectiveCameras.transform_points_screen.
Instructions To Reproduce the Issue:
This fairly simple example shows the issue:
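The exact snippet is not included here; a minimal sketch of this kind of setup against a recent PyTorch3D release looks roughly like the following. The ico_sphere mesh, the intrinsics/extrinsics values, and the overlay plotting are illustrative placeholders, and the image size is passed as (width, height), the ordering that the reply above corrects to (height, width).

```python
import numpy as np
import torch
import matplotlib.pyplot as plt

from pytorch3d.renderer import (
    MeshRasterizer,
    MeshRenderer,
    PointLights,
    RasterizationSettings,
    SoftPhongShader,
    TexturesVertex,
)
from pytorch3d.structures import Meshes
from pytorch3d.utils import cameras_from_opencv_projection, ico_sphere

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Non-square image size: the misalignment only shows up in this case.
h, w = 480, 640

# Arbitrary OpenCV-style intrinsics/extrinsics (illustrative values only).
K = np.array([[500.0, 0.0, w / 2.0],
              [0.0, 500.0, h / 2.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 3.0])

camera = cameras_from_opencv_projection(
    torch.FloatTensor(R).unsqueeze(0),
    torch.FloatTensor(t).unsqueeze(0),
    torch.FloatTensor(K).unsqueeze(0),
    torch.Tensor([w, h]).unsqueeze(0),  # note the (width, height) ordering here
).to(device)

# Simple test mesh with a constant white vertex texture.
sphere = ico_sphere(2, device)
verts = sphere.verts_packed()
mesh = Meshes(
    verts=[verts],
    faces=[sphere.faces_packed()],
    textures=TexturesVertex(verts_features=torch.ones_like(verts)[None]),
)

renderer = MeshRenderer(
    rasterizer=MeshRasterizer(
        cameras=camera,
        raster_settings=RasterizationSettings(image_size=(h, w)),
    ),
    shader=SoftPhongShader(
        device=device,
        cameras=camera,
        lights=PointLights(device=device, location=[[0.0, 0.0, -3.0]]),
    ),
)

# Render the mesh and project its vertices to screen (pixel) coordinates.
image = renderer(mesh)[0, ..., :3].cpu().numpy()
screen = camera.transform_points_screen(verts.unsqueeze(0))[0].cpu().numpy()

# Overlay the projected vertices on the rendering; the red dots end up
# offset from the rendered sphere.
plt.imshow(image)
plt.scatter(screen[:, 0], screen[:, 1], c="r", s=1)
plt.savefig("overlay.png")
```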
In the resulting image, I expected the red dots (the vertices projected with transform_points_screen) to align with the mesh rendered by MeshRenderer, but they do not.
The PerspectiveCameras.transform_points_screen result appears to be correct.
The problem only occurs when the image_size is non-square.
Possible Fix:
Looking at the rasterize_meshes_python function, I changed its code, and then created and used a PythonMeshRasterizer class whose only difference from MeshRasterizer is that it calls the modified rasterize_meshes_python instead of rasterize_meshes. With that in place, rendering seems to work as expected (a rough sketch of such a wrapper is given below).
However, this is a bit of hackery and not really a proper fix.
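Roughly, the PythonMeshRasterizer wrapper looks like the following simplified sketch (assuming a recent PyTorch3D layout; the modified rasterize_meshes_python is not shown, and only a subset of the raster settings is forwarded):

```python
from pytorch3d.renderer.mesh.rasterize_meshes import rasterize_meshes_python
from pytorch3d.renderer.mesh.rasterizer import Fragments, MeshRasterizer


class PythonMeshRasterizer(MeshRasterizer):
    """Same as MeshRasterizer, except that rasterization is done by the
    pure-Python reference implementation (the locally modified
    rasterize_meshes_python) instead of the C++/CUDA rasterize_meshes."""

    def forward(self, meshes_world, **kwargs) -> Fragments:
        # Transform the meshes from world space to NDC, exactly as the
        # stock MeshRasterizer does.
        meshes_ndc = self.transform(meshes_world, **kwargs)
        raster_settings = kwargs.get("raster_settings", self.raster_settings)

        # Rasterize with the Python implementation; only the basic
        # settings are forwarded in this sketch.
        pix_to_face, zbuf, bary_coords, dists = rasterize_meshes_python(
            meshes_ndc,
            image_size=raster_settings.image_size,
            blur_radius=raster_settings.blur_radius,
            faces_per_pixel=raster_settings.faces_per_pixel,
        )
        return Fragments(
            pix_to_face=pix_to_face,
            zbuf=zbuf,
            bary_coords=bary_coords,
            dists=dists,
        )
```

The renderer is then built with this class in place of MeshRasterizer.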