nutonomy / nuscenes-devkit

The devkit of the nuScenes dataset.
https://www.nuScenes.org

render_pointcloud_in_image logic #976

Closed ZOUYIyi closed 1 year ago

ZOUYIyi commented 1 year ago

Since the image is triggered when the lidar sweep crosses the center of the camera's field of view, and the timestamp of the point cloud is at the end of the sweep, why does render_pointcloud_in_image still involve time and ego pose?

whyekit-motional commented 1 year ago

@ZOUYIyi this is because you need to project the points (which are in the lidar frame) into the image (which is in the camera frame), and this is done by transforming the points through the following frames, in this order (a rough sketch follows the list):

  1. Transform points in lidar frame to ego frame
  2. Transform points in ego frame to global frame
  3. Transform points in global frame to ego frame of the timestamp of the image
  4. Transform points in ego frame of the timestamp of the image to camera frame
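For reference, here is a rough sketch of those four steps, similar in spirit to the devkit's own `map_pointcloud_to_image`. The `dataroot` and the choice of sample/cameras are placeholders, so treat it as illustrative rather than a drop-in replacement:

```python
import numpy as np
from pyquaternion import Quaternion

from nuscenes.nuscenes import NuScenes
from nuscenes.utils.data_classes import LidarPointCloud
from nuscenes.utils.geometry_utils import view_points

# Placeholder dataroot and sample choice.
nusc = NuScenes(version='v1.0-mini', dataroot='/data/sets/nuscenes', verbose=False)
sample = nusc.sample[0]
lidar_sd = nusc.get('sample_data', sample['data']['LIDAR_TOP'])
cam_sd = nusc.get('sample_data', sample['data']['CAM_FRONT'])

pc = LidarPointCloud.from_file(nusc.get_sample_data_path(lidar_sd['token']))

# Step 1: lidar frame -> ego frame (calibration of the lidar w.r.t. the ego vehicle).
cs_lidar = nusc.get('calibrated_sensor', lidar_sd['calibrated_sensor_token'])
pc.rotate(Quaternion(cs_lidar['rotation']).rotation_matrix)
pc.translate(np.array(cs_lidar['translation']))

# Step 2: ego frame -> global frame, using the ego pose at the lidar timestamp.
pose_lidar = nusc.get('ego_pose', lidar_sd['ego_pose_token'])
pc.rotate(Quaternion(pose_lidar['rotation']).rotation_matrix)
pc.translate(np.array(pose_lidar['translation']))

# Step 3: global frame -> ego frame at the camera timestamp.
pose_cam = nusc.get('ego_pose', cam_sd['ego_pose_token'])
pc.translate(-np.array(pose_cam['translation']))
pc.rotate(Quaternion(pose_cam['rotation']).rotation_matrix.T)

# Step 4: ego frame -> camera frame (calibration of the camera w.r.t. the ego vehicle).
cs_cam = nusc.get('calibrated_sensor', cam_sd['calibrated_sensor_token'])
pc.translate(-np.array(cs_cam['translation']))
pc.rotate(Quaternion(cs_cam['rotation']).rotation_matrix.T)

# Finally, project onto the image plane with the camera intrinsics.
# (The real helper additionally filters out points behind the camera / outside the image.)
depths = pc.points[2, :]
points_2d = view_points(pc.points[:3, :], np.array(cs_cam['camera_intrinsic']), normalize=True)
```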
ZOUYIyi commented 1 year ago

> @ZOUYIyi this is because you need to project the points (which are in the lidar frame) into the image (which is in the camera frame), and this is done by transforming the points through the following frames, in this order:
>
>   1. Transform points in lidar frame to ego frame
>   2. Transform points in ego frame to global frame
>   3. Transform points in global frame to ego frame of the timestamp of the image
>   4. Transform points in ego frame of the timestamp of the image to camera frame

Thanks for the reply, but I think the real reason is that the point cloud has been corrected for motion distortion. If it were just the original (raw) point cloud, we could apply only steps 1 and 4, since steps 2 and 3 are just about time alignment.

whyekit-motional commented 1 year ago

If the timestamp at which the lidar data is captured were exactly the same as the one at which the camera data is captured, then doing only steps 1 and 4 would probably suffice.

But there is usually a slight difference, so going through steps 1, 2, 3 and 4 gives a more accurate projection.
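To make that "slight difference" concrete, here is a toy check (the ego poses below are made-up numbers, not real nuScenes values) showing that steps 2 and 3 compose to the identity only when the ego pose at the lidar timestamp equals the ego pose at the camera timestamp:

```python
import numpy as np
from pyquaternion import Quaternion

def se3(q, t):
    """Build a 4x4 homogeneous transform (ego -> global) from a Quaternion and a translation."""
    mat = np.eye(4)
    mat[:3, :3] = q.rotation_matrix
    mat[:3, 3] = t
    return mat

# Hypothetical ego poses; the vehicle moves and yaws slightly between the two timestamps.
ego_at_lidar_time = se3(Quaternion(axis=[0, 0, 1], angle=0.00), [100.0, 50.0, 0.0])
ego_at_cam_time = se3(Quaternion(axis=[0, 0, 1], angle=0.03), [100.4, 50.1, 0.0])

# Steps 2 + 3 combined: ego (lidar time) -> global -> ego (camera time).
steps_2_and_3 = np.linalg.inv(ego_at_cam_time) @ ego_at_lidar_time

# If the two ego poses were identical, this product would be the identity and
# steps 2 and 3 could be skipped; here it is a small but non-trivial correction.
print(np.round(steps_2_and_3, 3))
```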

ZOUYIyi commented 1 year ago

As the nuScenes documentation says, the camera is triggered when the lidar sweep crosses the center of the camera's field of view, so there would be no need for steps 2 and 3 if we used the original point cloud. But I suppose the point cloud has been motion-undistorted, so we need steps 2 and 3.

ZOUYIyi commented 1 year ago

> If the timestamp at which the lidar data is captured were exactly the same as the one at which the camera data is captured, then doing only steps 1 and 4 would probably suffice.
>
> But there is usually a slight difference, so going through steps 1, 2, 3 and 4 gives a more accurate projection.

The timestamp of the lidar is at the end of the sweep, and a sweep can't be as quick as an image capture; that, I think, is the key to the design.

whyekit-motional commented 1 year ago

Yes @ZOUYIyi that is correct - the capture times of the lidar and the camera are slightly different