Hi @moonmoonmoonmoon, we have a couple of examples that show how to map bounding boxes to detected features in the point cloud: https://github.com/ouster-lidar/ouster-yolov5-demo. Also check out: https://community.ouster.com/t/running-yolo-on-2d-lidarscans-using-the-sdk/73
Thank you for your help! I checked the examples and links you provided. The code runs YOLOv5 inference to detect pedestrians in images and draws bounding boxes on the image, but it doesn't seem to also draw the corresponding boxes on the point cloud.
In addition, what I want to know is the reverse process. Specifically, I have the coordinates of a box in the point cloud, and I would like to map it onto an image and draw it. That is the process of converting point cloud data (Cartesian coordinates) to image data (polar coordinates). Do you have any examples or suggestions for this?
Thank you very much!
Although this is possible, it is not trivial: you will need prior knowledge of the sensor intrinsics (and also extrinsics), and then trace every point in your point cloud back to the corresponding (U, V) from which the laser was emitted. Every point will map to a pixel on a 2D image. Unfortunately, I don't have an example that shows this process at hand.
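For reference, here is a minimal sketch of that projection using only NumPy. It assumes you have the per-beam altitude angles and the horizontal resolution from the sensor metadata JSON (`beam_altitude_angles`, `columns_per_frame`), and it ignores the small lidar-origin-to-beam offset, so treat it as an approximation rather than an exact inverse of the SDK's XYZ lookup table:

```python
import numpy as np

def xyz_to_pixel(points, beam_altitude_angles, columns_per_frame):
    """Approximate mapping of Cartesian points (N, 3) in the sensor frame
    to (row, col) pixel coordinates of the destaggered 2D lidar image.

    beam_altitude_angles: per-beam altitude angles in degrees (from metadata).
    columns_per_frame:    horizontal resolution, e.g. 512, 1024, or 2048.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r_xy = np.sqrt(x**2 + y**2)

    # Row: pick the beam whose altitude angle is closest to the point's elevation.
    elevation = np.degrees(np.arctan2(z, r_xy))
    alts = np.asarray(beam_altitude_angles)                    # shape (H,)
    rows = np.abs(alts[None, :] - elevation[:, None]).argmin(axis=1)

    # Column: azimuth angle mapped onto [0, columns_per_frame).
    # The sign/offset convention may differ between firmware versions,
    # so verify against a known target before relying on it.
    azimuth = np.arctan2(y, x)                                 # (-pi, pi]
    cols = ((np.pi - azimuth) / (2 * np.pi)) * columns_per_frame
    cols = np.round(cols).astype(int) % columns_per_frame

    return rows, cols
```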
Thank you for your prompt reply! However, since the images are obtained directly from the LiDAR scans, I think there is no need for an intrinsic matrix; instead, the transformation can be achieved using the provided sensor metadata and the LiDAR's scanning configuration. Based on those parameters, I was able to obtain the U and V values by selecting the points with the minimum distance. However, the shape of the bounding box in the image appears irregular. By the way, am I right that the Ouster SDK currently does not provide a built-in function to directly map 3D bounding boxes from the point cloud to 2D image coordinates?
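In case it helps others reading this thread, one way to turn that irregular projection into a clean rectangle is to project every point falling inside the 3D box (not only its eight corners) with something like the `xyz_to_pixel` sketch above, then take the min/max of the resulting pixel coordinates, special-casing objects that straddle the azimuth wrap-around at column 0. A rough sketch, assuming `rows` and `cols` are the projected coordinates:

```python
import numpy as np

def box_2d_from_points(rows, cols, columns_per_frame):
    """Axis-aligned 2D box (row_min, row_max, col_min, col_max) from projected
    pixel coordinates, handling objects that straddle the column-0 seam of
    the panoramic image."""
    row_min, row_max = rows.min(), rows.max()

    span = cols.max() - cols.min()
    if span < columns_per_frame / 2:
        # Object does not cross the seam: plain min/max works.
        col_min, col_max = cols.min(), cols.max()
    else:
        # Object crosses the seam: shift columns by half a frame, take the
        # box there, then shift back. Callers must handle col_min > col_max,
        # e.g. by drawing the box in two pieces.
        shifted = (cols + columns_per_frame // 2) % columns_per_frame
        col_min = (shifted.min() - columns_per_frame // 2) % columns_per_frame
        col_max = (shifted.max() - columns_per_frame // 2) % columns_per_frame

    return row_min, row_max, col_min, col_max
```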
Hello everyone,
I am working on a project using an Ouster LiDAR. I have combined the depth, ambient, and range images obtained from the LiDAR scans into a single composite RGB image. Now I need to map bounding boxes, which are identified in the point cloud data, to the corresponding positions on the combined image.
Could anyone guide me on which API or interface to use to accomplish this and provide some details on how to implement it?
Thank you in advance for your assistance!
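For context on the composite image side of this, here is a rough sketch of how such an image is typically assembled with the Python SDK. The module path (`ouster.client` vs. `ouster.sdk.client`) and the exact channel fields available (e.g. `SIGNAL`/`NEAR_IR` vs. older names) depend on your SDK and firmware version, so this is only an illustration of the idea, not a drop-in implementation:

```python
import numpy as np
from ouster import client  # newer SDK versions: from ouster.sdk import client

def composite_rgb(scan, info):
    """Stack destaggered range, signal, and near-IR (ambient) fields of a
    LidarScan into one normalized HxWx3 image."""
    def norm(chan):
        img = client.destagger(info, scan.field(chan)).astype(np.float32)
        return img / img.max() if img.max() > 0 else img

    rng = norm(client.ChanField.RANGE)
    sig = norm(client.ChanField.SIGNAL)
    amb = norm(client.ChanField.NEAR_IR)
    return np.dstack((rng, sig, amb))
```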