I used map_pointcloud_to_image() to convert the radar point cloud from the radar sensor frame to the ego-vehicle frame (working on the first sample of scene 0: my_sample = nusc.sample[0]).
Using the same function, I also projected the radar points onto the image.
Now I want to project the corner points of the bounding box into the same Cartesian coordinates (as in picture 1). How can I do this?
I tried to do the exact reverse of what map_pointcloud_to_image() does, but that does not seem possible because the depth information needed by view_points() is missing. I then tried the code below:
import numpy as np
from pyquaternion import Quaternion

# corner points of the bounding box, shape (3, N)
points = np.load('Path to corner points of bounding box')
cs_record = nusc.get('calibrated_sensor', cam['calibrated_sensor_token'])
# sensor frame -> ego-vehicle frame: rotate, then translate
points = Quaternion(cs_record['rotation']).rotation_matrix @ points
points = points + np.array(cs_record['translation']).reshape(3, 1)
However, these points do not align with the radar points in the ego-vehicle frame shown in picture 1. How can I project the corner points of the bounding box into the ego-vehicle frame so that the radar points and the box corners can be viewed in a common coordinate system and compared? (Note: I would like to achieve this without involving the lidar.)
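For what it's worth, if the box corners come from the 3D annotations (e.g. via nusc.get_box(), which returns boxes in the global frame), moving them into the ego frame means inverting the ego pose: subtract the pose translation, then apply the inverse rotation. A minimal numpy sketch of that inverse transform, using a made-up ego pose instead of dataset records (the pose values here are illustrative, not from nuScenes):

```python
import numpy as np

def global_to_ego(points, ego_translation, ego_rotation):
    """Map (N, 3) points from the global frame into the ego-vehicle frame.

    ego_rotation is the 3x3 rotation matrix of the ego pose (ego -> global);
    its transpose is the inverse rotation, so rows become R^T (p - t).
    """
    return (points - ego_translation) @ ego_rotation

# mock ego pose: vehicle at (10, 5, 0), heading rotated 90 degrees about z
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([10.0, 5.0, 0.0])

# one box corner, 1 m east of the vehicle in the global frame
corners_global = np.array([[11.0, 5.0, 0.0]])
corners_ego = global_to_ego(corners_global, t, R)
print(corners_ego)
```

With the devkit's Box objects the same step is box.translate(-np.array(pose['translation'])) followed by box.rotate(Quaternion(pose['rotation']).inverse).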
@Pavithree I don't think you can project the bounding boxes from a 2D image (which I assume came from an image-based object detector) into 3D space, due to the lack of depth information.
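The depth ambiguity can be shown directly: every 3D point along a camera ray projects to the same pixel, so a 2D box corner alone does not pin down a 3D position. A small sketch with a made-up pinhole intrinsic matrix (the values of K and the ray are arbitrary, just for illustration):

```python
import numpy as np

# made-up pinhole intrinsics: focal length 1000 px, principal point (800, 450)
K = np.array([[1000.0,    0.0, 800.0],
              [   0.0, 1000.0, 450.0],
              [   0.0,    0.0,   1.0]])

def project(point_cam):
    """Project a 3D point in the camera frame to pixel coordinates (u, v)."""
    uvw = K @ point_cam
    return uvw[:2] / uvw[2]

# two points on the same viewing ray, at depths 5 m and 20 m
ray = np.array([0.2, -0.1, 1.0])
p_near, p_far = 5.0 * ray, 20.0 * ray

# both land on exactly the same pixel, so the inverse mapping is not unique
print(project(p_near), project(p_far))
```

This is why inverting view_points() needs a depth per pixel: without it, any point on the ray is an equally valid preimage.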