Closed: LLDavid closed this issue 4 years ago
Hi @LLDavid,
To get a sequence of picked points, a relevant example is here: http://www.open3d.org/docs/tutorial/Advanced/interactive_visualization.html#manual-registration In short:
```python
def pick_points(pcd):
    print("")
    print("1) Please pick at least three correspondences using [shift + left click]")
    print("   Press [shift + right click] to undo point picking")
    print("2) After picking points, press Q to close the window")
    vis = VisualizerWithEditing()
    vis.create_window()
    vis.add_geometry(pcd)
    vis.run()  # user picks points
    vis.destroy_window()
    print("")
    return vis.get_picked_points()
```
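For completeness, the returned values are indices into the point cloud, so the 3D coordinates can be recovered with NumPy. A minimal sketch; the array below is a hypothetical stand-in for `np.asarray(pcd.points)`:

```python
import numpy as np

# Hypothetical stand-in for np.asarray(pcd.points) on the visualized cloud.
points = np.array([[0.00, 0.00, 0.50],
                   [0.10, 0.00, 0.60],
                   [0.00, 0.10, 0.70]])

picked_ids = [2, 0]              # as returned by pick_points(pcd)
picked_xyz = points[picked_ids]  # 3D coordinates of the picked points
print(picked_xyz)
```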
To use the intrinsic parameters from a RealSense camera, please check this script: https://github.com/IntelVCL/Open3D/blob/master/examples/Python/ReconstructionSystem/sensors/realsense_pcd_visualizer.py It retrieves the camera intrinsics from the RealSense camera and visualizes the point cloud correctly. In short:
```python
def get_intrinsic_matrix(frame):
    intrinsics = frame.profile.as_video_stream_profile().intrinsics
    out = PinholeCameraIntrinsic(640, 480,
                                 intrinsics.fx, intrinsics.fy,
                                 intrinsics.ppx, intrinsics.ppy)
    return out
```
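Those four intrinsics are all the pinhole model needs to back-project a pixel by hand. A minimal NumPy sketch; the intrinsic values below are made up for illustration, not from a real camera:

```python
import numpy as np

def deproject(u, v, z, fx, fy, ppx, ppy):
    """Back-project pixel (u, v) with depth z (meters) to a 3D point."""
    x = (u - ppx) * z / fx
    y = (v - ppy) * z / fy
    return np.array([x, y, z])

# Made-up intrinsics for a 640x480 image.
fx, fy, ppx, ppy = 600.0, 600.0, 320.0, 240.0
p = deproject(320.0, 240.0, 1.5, fx, fy, ppx, ppy)
print(p)  # the principal point maps onto the optical axis: [0, 0, 1.5]
```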
Hi @syncle, thanks for the reply. The example shows how to get a sequence of points picked from a 3D point cloud, but is there any way to get a sequence of 3D points from 2D RGB points picked in the color frame?
@syncle Sorry for the confusion. I should have made it clearer.
Here is my original question: https://github.com/IntelRealSense/librealsense/issues/2736#issue-381663068
Basically I am trying to calibrate 4 cameras by registering a set of point clouds built from points picked in the color frames.
I tried the manual registration example, but the accuracy is not enough for my application.
Umm, I am not sure I understand your configuration and requirements correctly.
If your task is to get correspondences in the image domain, I can think of two suggestions:
- One is to make a 3D planar scene using a color image. You can use the x, y coordinates just as in the image domain, and set z = 0 to make the plane. With this configuration, when a point is picked you get the point index and its x, y values, which are identical to the 2D color image coordinates.
- The second option is to make a point cloud from an RGBD image. After a point is selected, you can project it back into the image domain to recover its 2D image coordinates.
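The first suggestion can be sketched with plain NumPy: build an N x 3 array whose x, y come from the pixel grid and whose z is 0. With a row-major grid as below, a picked index i maps straight back to pixel (i % width, i // width). The Open3D wrapping calls in the comment are the standard utility API; everything else is a toy illustration:

```python
import numpy as np

width, height = 4, 3  # tiny image for illustration

# x, y follow the image coordinates; z = 0 makes the scene planar.
u, v = np.meshgrid(np.arange(width), np.arange(height))
points = np.stack([u.ravel(), v.ravel(), np.zeros(width * height)], axis=1)
# In Open3D this would become:
#   pcd = o3d.geometry.PointCloud()
#   pcd.points = o3d.utility.Vector3dVector(points)

# A picked index from pick_points() maps straight back to a pixel:
i = 6
pixel = (i % width, i // width)
print(points[i], pixel)  # point [2, 1, 0] corresponds to pixel (2, 1)
```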
Actually, you can avoid this trick altogether by just making a point cloud from an RGBD image and using the manual registration tutorial I suggested to get the transformation between two RGBD images.
I tried using the intrinsic parameters to get the points directly (ref), but the visual result is a mess.
Maybe it is because the maximum depth is 9999?
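If stray far-range readings like that 9999 are the problem, masking them out before back-projection is a cheap check. A NumPy sketch; the 9999 here is just the reported maximum in this thread, not a RealSense constant, and the depth values below are hypothetical (zeroed pixels are typically treated as invalid and skipped):

```python
import numpy as np

# Hypothetical raw depth frame in millimeters, with one bogus far reading.
depth = np.array([[500, 600, 9999],
                  [550,   0,  700]], dtype=np.uint16)

max_depth_mm = 3000  # assumed useful range for a close-range setup
cleaned = np.where(depth > max_depth_mm, 0, depth)  # 0 = invalid / skipped
print(cleaned)
```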
@syncle Hi. I am trying out another method and have a small question. Is the origin of the point cloud XYZ created by the function 'create_rgbd_image_from_color_and_depth()' still at the left depth imager? Thanks
I am not sure what the left imager is. Assuming the color and depth images are aligned, the point cloud is back-projected using the aligned coordinate frame.
Hi @syncle. Sorry to bother you again. I still have a question.
I am trying to make use of the demo_manual_registration example.
Suppose I have a color image and a depth image, and the point cloud is constructed from them. The function 'pick_points()' returns the IDs of the points you select on the point cloud.
But what if, instead of selecting on the point cloud, I select a point on the color image: how can I get the ID of the corresponding point in the constructed point cloud?
In other words, when using the function 'create_point_cloud_from_rgbd_image()', how are the points ordered in the constructed point cloud?
Thanks
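On the ordering question: as far as I can tell, points are generated by scanning the depth image in row-major order and skipping pixels with invalid (zero) depth, so a pixel's point index is the count of valid pixels before it. This is an assumption you should verify against your Open3D version; a NumPy sketch of the pixel-to-index mapping under that assumption:

```python
import numpy as np

# Hypothetical depth image; 0 marks invalid pixels that produce no point.
depth = np.array([[100,   0, 120],
                  [  0, 130, 140]], dtype=np.uint16)

valid = (depth > 0).ravel()       # row-major scan, as assumed above
index_map = np.cumsum(valid) - 1  # flat pixel position -> point index
index_map[~valid] = -1            # invalid pixels map to no point

def point_index(u, v, width=depth.shape[1]):
    """Point-cloud index for pixel (u, v), or -1 if its depth was invalid."""
    return int(index_map[v * width + u])

print(point_index(2, 0))  # third pixel, but second valid one -> index 1
```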
How is this meant to be done now that VisualizerWithEditing is deprecated? I'm having an issue where I cannot add multiple geometries to VisualizerWithEditing, and this issue (#239) tells me that I should no longer be using it. What is the alternative now that lets me select points and also add multiple point clouds?
Hi, I am working on a script for multi RealSense calibration. A set of points will be picked from the aligned color frame of each camera, and I use the RGBD image pair to get the point cloud:
Then I want to use the point clouds to calculate the RT matrix. But how can I know the index of each picked point in the point cloud? BTW, I tried using intrinsic parameters to get the points directly (ref), but the visual result is a mess. The visualized point cloud is built from the RGBD pairs, so I guess it is better to use the same method to get the point cloud for calibration. Does anyone have any clue? Thanks