Closed: cshang412 closed this issue 2 months ago
Thanks for this excellent open-source work. I want to combine the original image with the predicted elevation map for visualization. How can I align the RGB image with the inference results? Do you have a visualization script for this?
You can project the elevation points onto the image plane. Refer to the related code in the dataset dev kit:
```python
import cv2
import numpy as np
import open3d as o3d

# Load the calibration parameters (dev kit helper)
calib_params = read_calib_params('I:/data/dev_kit/calibration/calib_20230406.pkl')
image_path = 'XXX/train/2023-04-08-02-33-11/left/20230408023909.400.jpg'
pcd_path = 'XXX/train/2023-04-08-02-33-11/pcd/20230408023909.400.pcd'

# Load the image and convert it to RGB
image = cv2.imread(image_path)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

# Load the point cloud as an (N, 3) array
cloud = o3d.io.read_point_cloud(pcd_path)
cloud = np.asarray(cloud.points)

# Project the points onto the image plane (dev kit helpers)
uv, depth_uv = project_point2camera(calib_params, cloud)
dis_map, depth_map = get_dis_depth_map(uv, depth_uv, calib_params)

# Visualize the projected points and the colored point cloud
show_image_with_points(uv, depth_uv, image)
show_clouds_with_color(image, depth_map, calib_params)
```
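For reference, `project_point2camera` presumably performs a standard pinhole projection. Here is a minimal stand-in sketch; the assumption that the calibration provides a world-to-camera rotation `R`, translation `t`, and intrinsic matrix `K` is mine, not confirmed by the dev kit:

```python
import numpy as np

def project_points(points, R, t, K):
    """Project Nx3 world points into pixel coordinates with a pinhole model.

    R (3x3), t (3,): world-to-camera extrinsics; K (3x3): camera intrinsics.
    Returns (uv, depth) for points in front of the camera.
    """
    cam = points @ R.T + t            # world frame -> camera frame
    in_front = cam[:, 2] > 0          # keep points with positive depth
    cam = cam[in_front]
    proj = cam @ K.T                  # apply intrinsics
    uv = proj[:, :2] / proj[:, 2:3]   # perspective divide
    return uv, cam[:, 2]

# Example with identity extrinsics and simple intrinsics:
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0, 10.0], [1.0, 0.0, 10.0]])
uv, depth = project_points(pts, np.eye(3), np.zeros(3), K)
# a point on the optical axis lands at the principal point (320, 240)
```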
Thank you for the reply. Could you clarify whether we should replace the elevation data in the point cloud file at pcd_path with our predicted results? Also, how is the region of interest (ROI) within the image determined?
We have updated test.py; it now saves the inference results for visualization.
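Once the predictions are projected to per-pixel values (e.g. via `get_dis_depth_map` above), blending them over the RGB image is straightforward. A minimal NumPy-only sketch, assuming a per-pixel elevation map with NaN marking empty pixels (the function name and that convention are assumptions, not part of the dev kit):

```python
import numpy as np

def overlay_elevation(image_rgb, elev_map, alpha=0.5):
    """Blend a normalized elevation map (as a red heat channel) over an RGB image.

    image_rgb: (H, W, 3) uint8; elev_map: (H, W) float, NaN where no point projects.
    """
    valid = np.isfinite(elev_map)
    out = image_rgb.copy()
    if not valid.any():
        return out
    v = elev_map[valid]
    # scale valid elevations to [0, 1]
    norm = (elev_map - v.min()) / max(v.max() - v.min(), 1e-6)
    heat = np.zeros_like(image_rgb)
    heat[..., 0] = np.where(valid, 255.0 * np.clip(norm, 0.0, 1.0), 0.0).astype(np.uint8)
    # alpha-blend only where an elevation value exists
    out[valid] = (alpha * heat[valid] + (1 - alpha) * image_rgb[valid]).astype(np.uint8)
    return out

# Usage: tiny synthetic example
img = np.full((4, 4, 3), 100, dtype=np.uint8)
elev = np.full((4, 4), np.nan)
elev[0, 0], elev[0, 1] = 0.0, 1.0
blended = overlay_elevation(img, elev)
# pixels with no elevation value are left unchanged
```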