nutonomy / nuscenes-devkit

The devkit of the nuScenes dataset.
https://www.nuScenes.org

camera/pixel coordinate to ego vehicle frame #965

Closed: Pavithree closed this issue 11 months ago

Pavithree commented 11 months ago

Hello,

I used map_pointcloud_to_image() to convert the radar point cloud from the radar sensor frame to the ego vehicle frame (working on the first sample of scene 0, i.e. my_sample = nusc.sample[0]; see picture 1). Using the same function, I also projected the radar points onto the camera image (picture 2). Now I want to project the corner points of the bounding box into the same Cartesian coordinates as in picture 1. How can I do this?
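For reference, the forward chain I used looks roughly like this. It is only a minimal sketch of what map_pointcloud_to_image() does up to the ego frame; the RADAR_FRONT channel and the variable names are my assumptions:

import numpy as np
from pyquaternion import Quaternion
from nuscenes.utils.data_classes import RadarPointCloud

# Load the radar point cloud of the first sample of scene 0.
my_sample = nusc.sample[0]
radar_data = nusc.get('sample_data', my_sample['data']['RADAR_FRONT'])
pc = RadarPointCloud.from_file(nusc.get_sample_data_path(radar_data['token']))

# Radar sensor frame -> ego vehicle frame via the calibrated_sensor record.
cs = nusc.get('calibrated_sensor', radar_data['calibrated_sensor_token'])
pc.rotate(Quaternion(cs['rotation']).rotation_matrix)
pc.translate(np.array(cs['translation']))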

I tried to do the exact reverse of what is done in map_pointcloud_to_image(), but that does not seem possible because view_points() cannot be inverted without depth information. I also tried the code below:

import numpy as np
from pyquaternion import Quaternion

def map_corners_to_ego():
      # Corner points of the bounding box in the camera frame, shape (3, N).
      points = np.load('Path to corner points of bounding box')
      cs_record = nusc.get('calibrated_sensor', cam['calibrated_sensor_token'])
      # Camera frame -> ego vehicle frame: rotate, then translate.
      points = np.dot(Quaternion(cs_record['rotation']).rotation_matrix, points)
      return points + np.array(cs_record['translation']).reshape((3, 1))

However, these points do not line up with the radar points in the ego vehicle frame shown in picture 1. How can I project the corner points of the bounding box into the ego vehicle frame, so that the radar points and the box corners are in a common coordinate system and directly comparable? (Note: I would like to achieve this without involving the lidar.)
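For completeness, if the corners were taken from the nuScenes 3D annotations rather than from a 2D detector, I believe they could be brought into the ego frame via the global frame. A sketch, reusing my_sample and radar_data from above and picking the first annotation of the sample as an example:

from nuscenes.utils.data_classes import Box

# Build the annotation box in the global frame.
ann = nusc.get('sample_annotation', my_sample['anns'][0])
box = Box(ann['translation'], ann['size'], Quaternion(ann['rotation']))

# Global frame -> ego vehicle frame using the ego_pose of the radar sample_data.
pose = nusc.get('ego_pose', radar_data['ego_pose_token'])
box.translate(-np.array(pose['translation']))
box.rotate(Quaternion(pose['rotation']).inverse)

corners = box.corners()  # (3, 8) corner points, now in the ego vehicle frame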

Thank you!

whyekit-motional commented 11 months ago

@Pavithree I don't think you can project bounding boxes from a 2D image (which I assume came from an image-based object detector) into 3D space, due to the lack of depth information.
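To illustrate the missing-depth problem: inverting the camera projection used by view_points() only recovers a viewing ray per pixel, not a 3D point. A sketch, where cs_record is the camera's calibrated_sensor record from your snippet and u, v, z are hypothetical values:

import numpy as np

K = np.array(cs_record['camera_intrinsic'])     # 3x3 camera intrinsics
u, v = 800.0, 450.0                             # hypothetical pixel coordinates
ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing ray in the camera frame
# point_cam = z * ray  # the 3D point is only fixed if the depth z is known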