XuM007 opened this issue 3 months ago
I met the same problem. The point cloud is good, but when I use the focal length and pose to project it to a 2D image, the result looks strange.
Hi, it seems this is a coordinate-frame problem. The camera pose from poses = scene.get_im_poses()
is in camera-to-world convention; consider converting it to world-to-camera with inv(poses)
if you want to apply it to the point clouds.
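For reference, inverting a rigid camera-to-world pose can be sketched as follows (a minimal example, assuming a standard 4x4 SE(3) pose matrix; the helper name is ours, not part of the library):

```python
import numpy as np

def invert_pose(c2w):
    """Invert a 4x4 camera-to-world pose to world-to-camera.

    For a rigid pose [R | t], the inverse is [R^T | -R^T t],
    which is cheaper and more stable than a general matrix inverse.
    """
    R = c2w[:3, :3]
    t = c2w[:3, 3]
    w2c = np.eye(4)
    w2c[:3, :3] = R.T
    w2c[:3, 3] = -R.T @ t
    return w2c
```

np.linalg.inv(pose) gives the same result for a valid rigid pose; the closed form above just makes the convention explicit.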
Thank you for your suggestion, but if poses = scene.get_im_poses()
returns camera-to-world poses, then the point clouds (in camera coordinates, captured with an RGB-D camera) can be multiplied by them to place the different views in world coordinates. I think this attempt is reasonable.
In the meantime, I also tried your suggestion; applying np.linalg.inv(pose)
to the point cloud gives the following result.
As you can see, unfortunately this assumption did not solve the problem I encountered.
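The multiplication described above can be sketched as a homogeneous transform of an (N, 3) point array (a minimal illustration; the function name is hypothetical):

```python
import numpy as np

def apply_pose(points, pose):
    """Apply a 4x4 pose to an (N, 3) array of points.

    Appends a homogeneous coordinate, multiplies by the pose,
    and drops the homogeneous coordinate again.
    """
    homo = np.hstack([points, np.ones((len(points), 1))])
    return (pose @ homo.T).T[:, :3]
```

This is equivalent to what open3d's PointCloud.transform does internally for rigid poses.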
Okay, thanks for your feedback. Could you share more details about the point-cloud data (files) and how you combine the pose with the point cloud you obtained?
Thank you very much for your help, and sorry for the late reply. Below is the data I used during testing. In the attached test.zip, view1.jpg and view2.jpg are the input images. I obtained their camera poses following the usage instructions and saved them as pose.json. Then I took the two point clouds captured by the depth camera and transformed them using the obtained pose, with the following code:
import json
import numpy as np
import open3d as o3d

ply1 = o3d.io.read_point_cloud("test/view1.ply")
ply2 = o3d.io.read_point_cloud("test/view2.ply")
with open("test/pose.json") as f:
    pose = np.array(json.load(f))[0]
# ply2.transform(pose)
ply1.transform(pose)
o3d.visualization.draw_geometries([ply1, ply2])
but got the following results:
However, when I run the demo on the web page, I get the following results:
It can be seen that the results when I actually apply the pose are not ideal, but I don't know whether this is a problem with my procedure, or whether the point-cloud results produced by the model do not exactly correspond to the pose estimates. I apologize again for the late reply. If you are willing, you can also contact me at xum007007@gmail.com.
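One possible source of the mismatch in the code above: transforming only one cloud by a single absolute pose mixes frames. With camera-to-world poses, a common approach is to compute the relative transform between the two views and apply it to the second cloud (a sketch under that assumption; the helper name is ours):

```python
import numpy as np

def relative_transform(c2w_1, c2w_2):
    """4x4 transform mapping view-2 camera-frame points into view-1's camera frame.

    Chain: camera-2 -> world (c2w_2), then world -> camera-1 (inv(c2w_1)).
    """
    return np.linalg.inv(c2w_1) @ c2w_2
```

With open3d this would look like ply2.transform(relative_transform(pose1, pose2)) while leaving ply1 untouched, assuming both clouds are in their own camera frames.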
@XuM007 is this issue resolved ?
It has not been solved yet. This seems to be because the model's point-cloud depth is estimated, so the camera pose also corresponds to that estimated depth. There is therefore a scale difference relative to the real data, which makes the camera pose invalid for metric point clouds. This may not be easy to fix.
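If the only issue is a global scale ambiguity, one could in principle estimate the scale from a few pixels where both an estimated depth and a metric (RGB-D) depth are available, and rescale the pose translation accordingly. A minimal sketch under that assumption (helper names are hypothetical, not part of any library):

```python
import numpy as np

def estimate_scale(depth_est, depth_metric, eps=1e-6):
    """Median ratio between metric and estimated depths over valid pixels.

    The median is robust to outliers from depth noise or mismatched pixels.
    """
    valid = (depth_est > eps) & (depth_metric > eps)
    return float(np.median(depth_metric[valid] / depth_est[valid]))

def rescale_pose(c2w, scale):
    """Scale only the translation of a camera-to-world pose; rotation is scale-free."""
    out = c2w.copy()
    out[:3, 3] *= scale
    return out
```

This assumes the estimated geometry differs from the metric one by a single global scale, which may not hold if the estimate has per-view or non-uniform distortion.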
Thanks for answering my query, @XuM007
Thank you for your excellent work. However, some problems arise when I try to use cam_pose. Specifically, for four pictures, the demo output gives the following,
and you can see that the camera poses look good.
But when I apply the pose parameters directly to the point clouds locally, I get the following results.
There is an unreasonable gap between the views.
I want to know whether the way I get the poses is wrong (
poses = scene.get_im_poses()
), or whether the point-cloud results displayed on the web page do not exactly correspond to the poses obtained from the model. Looking forward to your reply.
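One quick way to check which convention a pose follows is to compare camera centers: for a camera-to-world pose the center is simply the translation column, while for a world-to-camera pose it is -R^T t. A small sanity-check sketch (the function name is ours, for illustration only):

```python
import numpy as np

def camera_center(pose, convention="c2w"):
    """Camera center in world coordinates under either pose convention."""
    R, t = pose[:3, :3], pose[:3, 3]
    if convention == "c2w":
        return t.copy()          # center is the translation directly
    return -R.T @ t              # world-to-camera: recover center from [R | t]
```

If the centers computed under the assumed convention do not match the camera positions shown in the demo's 3D view, the poses are likely being applied in the wrong direction.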