What exactly are the shapes of the data you have for intrinsic, extrinsic1 and extrinsic2? I would think you should create camera1 and camera2 in similar ways.
An example (screenshots of Image-1 and Image-2 were attached):
Intrinsic matrix: [[ 0.90174353, 0. , 0.5 ], [ 0. , 1.6036477 , 0.5 ], [ 0. , 0. , 1. ]]
Extrinsic1 matrix for image-1: [[ 0.95821565, -0.00884962, -0.2859099 , -0.10209601], [-0.01124797, 0.99758255, -0.06857479, -0.0267839 ], [ 0.2858256 , 0.06892534, 0.9557997 , -0.18922901]]
Extrinsic2 matrix for image-2: [[ 0.9352163 , -0.00962112, -0.35394606, -0.10288198], [-0.01790597, 0.9970666 , -0.07441484, -0.0257245 ], [ 0.35362378, 0.07593172, 0.9323007 , -0.238874 ]]
How can I render image-1 to the position of image-2 using PyTorch3D? I would like to get the warped image, which will have some masked regions.
Thanks a lot!
I think you may already have the right code for making the cameras. It depends on exactly what your data means, and you can compare that with our camera documentation.
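For reference, a rough sketch of one way to build the two cameras from the matrices above. This assumes the extrinsics are OpenCV-style world-to-camera matrices [R | t] and the intrinsics are normalized to [0, 1]; the image size and variable names are placeholders, and the conversion may need adjusting to whatever conventions your data actually uses.

```python
import torch
from pytorch3d.utils import cameras_from_opencv_projection

H, W = 256, 256  # assumed image size; use your real resolution

# Intrinsics from the post, assumed normalized to [0, 1]; scale to pixel units.
K = torch.tensor([[0.90174353, 0.0, 0.5],
                  [0.0, 1.6036477, 0.5],
                  [0.0, 0.0, 1.0]])
K_pix = K.clone()
K_pix[0] *= W
K_pix[1] *= H

# Extrinsics from the post, assumed to be OpenCV-style world-to-camera [R | t].
extrinsic1 = torch.tensor([[ 0.95821565, -0.00884962, -0.2859099,  -0.10209601],
                           [-0.01124797,  0.99758255, -0.06857479, -0.0267839 ],
                           [ 0.2858256,   0.06892534,  0.9557997,  -0.18922901]])
extrinsic2 = torch.tensor([[ 0.9352163,  -0.00962112, -0.35394606, -0.10288198],
                           [-0.01790597,  0.9970666,  -0.07441484, -0.0257245 ],
                           [ 0.35362378,  0.07593172,  0.9323007,  -0.238874  ]])

def make_camera(extrinsic):
    # cameras_from_opencv_projection handles the OpenCV -> PyTorch3D
    # coordinate-convention conversion.
    R = extrinsic[:, :3].unsqueeze(0)        # (1, 3, 3)
    tvec = extrinsic[:, 3].unsqueeze(0)      # (1, 3)
    image_size = torch.tensor([[H, W]])      # (1, 2) as (height, width)
    return cameras_from_opencv_projection(R, tvec, K_pix.unsqueeze(0), image_size)

camera1 = make_camera(extrinsic1)
camera2 = make_camera(extrinsic2)
```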
You need a depth map of image 1. If you don't have a depth map, construct a constant one. Then make a point cloud with the function pytorch3d.implicitron.tools.point_cloud_utils.get_rgbd_point_cloud and camera1. Then render the point cloud with camera2 and a pytorch3d.renderer.PointsRenderer (or PulsarPointsRenderer) to get image2.
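A rough sketch of that pipeline, assuming `image1` is a (1, 3, H, W) RGB tensor in [0, 1], `depth1` is a (1, 1, H, W) depth map aligned with it, and `camera1`/`camera2` are the cameras built above; the variable names and rasterization settings are placeholders, not from the original post.

```python
import torch
from pytorch3d.implicitron.tools.point_cloud_utils import get_rgbd_point_cloud
from pytorch3d.renderer import (
    AlphaCompositor,
    PointsRasterizationSettings,
    PointsRasterizer,
    PointsRenderer,
)

H, W = image1.shape[-2:]
if depth1 is None:
    depth1 = torch.ones(1, 1, H, W)  # constant depth if no real depth is available

# Unproject image1 into a colored point cloud using camera1.
point_cloud = get_rgbd_point_cloud(camera1, image_rgb=image1, depth_map=depth1)

# Re-render the point cloud from camera2's viewpoint.
raster_settings = PointsRasterizationSettings(
    image_size=(H, W), radius=0.01, points_per_pixel=8
)
renderer = PointsRenderer(
    rasterizer=PointsRasterizer(cameras=camera2, raster_settings=raster_settings),
    compositor=AlphaCompositor(background_color=(0.0, 0.0, 0.0)),
)
warped_image = renderer(point_cloud)  # (1, H, W, 3)
# Pixels that camera1 never saw keep the background color; comparing against
# the background gives the mask of missing regions.
```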
❓ Questions on how to use PyTorch3D
Hi. I tried to find a solution for rendering image1 to image2, but I could not find an answer.
I am new to this field.
Details:
Given: image1, intrinsic, extrinsic1, extrinsic2. If I have some given code, how can I define the second camera so that rendering produces the warped_image at the image2 position?
Thanks a lot.