nutonomy / nuscenes-devkit

The devkit of the nuScenes dataset.
https://www.nuScenes.org

LiDAR to RADAR Transformation: Unexpected Results #1107

Closed: HAMA-DL-dev closed this issue 2 months ago

HAMA-DL-dev commented 2 months ago

Hello. I recently asked a question about the transformation from LiDAR to RADAR. Following your advice and the references you provided in other issues, I successfully performed the LiDAR-to-image and RADAR_FRONT-to-image transformations.

[image: LiDAR-to-image projection result]

[image: RADAR_FRONT-to-image projection result]

However, when I transformed from LiDAR to RADAR_FRONT using the same method and viewed the result as a *.pcd file, the LiDAR point cloud appeared only to be flipped vertically. (The top of the image shows the transformed LiDAR point cloud, the bottom the original.)

[screenshot: transformed LiDAR point cloud (top) vs. original (bottom), viewed as a *.pcd file]

I expected that after transforming from LiDAR to RADAR_FRONT, the point cloud would be positioned further forward and lower than before. Please let me know if this assumption is incorrect.

I also attempted the LIDAR => RADAR_FRONT => image transformation, but it produced the same result as the direct LiDAR-to-image transformation. I am unable to determine what went wrong, so I am raising the issue again.

[image: LiDAR => RADAR_FRONT => image projection result]

In the toggle sections below, I have included the code I used for each transformation.

Lastly, I would like to reiterate that the reason for transforming from LiDAR to RADAR is to train on the LiDAR point cloud as if the sensor were located at the RADAR_FRONT position. Thank you.
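For reference, a small sketch of how this expectation could be checked directly from the calibration (assuming the `nusc`, `lidar` and `radar` variables from the initialization toggle below, and ignoring the small ego motion between the two sensor timestamps):

```python
from pyquaternion import Quaternion
from nuscenes.utils.geometry_utils import transform_matrix

lidar_cs = nusc.get('calibrated_sensor', lidar['calibrated_sensor_token'])
radar_cs = nusc.get('calibrated_sensor', radar['calibrated_sensor_token'])

# 4x4 homogeneous transforms: LIDAR_TOP frame -> ego frame, and ego frame -> RADAR_FRONT frame.
ego_from_lidar = transform_matrix(lidar_cs['translation'], Quaternion(lidar_cs['rotation']), inverse=False)
radar_from_ego = transform_matrix(radar_cs['translation'], Quaternion(radar_cs['rotation']), inverse=True)
radar_from_lidar = radar_from_ego @ ego_from_lidar

# Position of the LIDAR_TOP origin expressed in the RADAR_FRONT frame (metres).
print(radar_from_lidar[:3, 3])
```
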

initialization and functions (save_pcd, point_to_img)
```python
import os
import os.path as osp

import matplotlib.pyplot as plt
import numpy as np
from PIL import Image
from pyquaternion import Quaternion

from nuscenes.utils.data_classes import LidarPointCloud, RadarPointCloud
from nuscenes.utils.geometry_utils import view_points

# `nusc` is assumed to be an already-initialised NuScenes instance.

# Scene number
my_scene = nusc.scene[11]
first_sample_token = my_scene['first_sample_token']
my_sample = nusc.get('sample', first_sample_token)

# Sensors
lidar_token = my_sample['data']['LIDAR_TOP']
lidar = nusc.get('sample_data', lidar_token)
pcl_path = osp.join(nusc.dataroot, lidar['filename'])
pc = LidarPointCloud.from_file(pcl_path)
points_lidar = pc

radar_token = my_sample['data']['RADAR_FRONT']
radar = nusc.get('sample_data', radar_token)
radar_path = osp.join(nusc.dataroot, radar['filename'])
pc_r = RadarPointCloud.from_file(radar_path)

cam_token = my_sample['data']['CAM_FRONT']
cam = nusc.get('sample_data', cam_token)
im = Image.open(osp.join(nusc.dataroot, cam['filename']))

# Rendering options (taken from the defaults of the devkit's render functions).
dot_size: int = 5
pointsensor_channel: str = 'LIDAR_TOP'
camera_channel: str = 'CAM_FRONT'
out_path: str = None
render_intensity: bool = False
show_lidarseg: bool = False
filter_lidarseg_labels: list = None
show_lidarseg_legend: bool = False
verbose: bool = True
lidarseg_preds_bin_path: str = None
show_panoptic: bool = False


def save_pcd(file_path, points):
    """Save an (N, >=3) array of points as an ASCII *.pcd file (x, y, z only)."""
    with open(file_path, 'w') as f:
        # Write PCD header
        f.write('# .PCD v0.7 - Point Cloud Data file format\n')
        f.write('VERSION 0.7\n')
        f.write('FIELDS x y z\n')
        f.write('SIZE 4 4 4\n')
        f.write('TYPE F F F\n')
        f.write('COUNT 1 1 1\n')
        f.write(f'WIDTH {points.shape[0]}\n')
        f.write('HEIGHT 1\n')
        f.write('VIEWPOINT 0 0 0 1 0 0 0\n')
        f.write(f'POINTS {points.shape[0]}\n')
        f.write('DATA ascii\n')
        # Write point data
        for point in points:
            f.write(f'{point[0]} {point[1]} {point[2]}\n')


def point_to_img(pc):
    """Project a point cloud that is already in the camera frame onto the image.

    Adapted from the devkit's map_pointcloud_to_image. Note that it relies on the
    globals `im` and `cs_record`, where `cs_record` must be the camera's
    calibrated_sensor record set by the caller. The intensity / lidarseg branches
    below are copied from the devkit and are not exercised here (all flags are False).
    """
    min_dist: float = 1.0
    render_intensity: bool = False
    show_lidarseg: bool = False
    filter_lidarseg_labels: list = None
    lidarseg_preds_bin_path: str = None
    show_panoptic: bool = False

    depths = pc.points[2, :]

    if render_intensity:
        assert pointsensor['sensor_modality'] == 'lidar', 'Error: Can only render intensity for lidar, ' \
                                                          'not %s!' % pointsensor['sensor_modality']
        # Retrieve the color from the intensities.
        # Performs arbitrary scaling to achieve more visually pleasing results.
        intensities = pc.points[3, :]
        intensities = (intensities - np.min(intensities)) / (np.max(intensities) - np.min(intensities))
        intensities = intensities ** 0.1
        intensities = np.maximum(0, intensities - 0.5)
        coloring = intensities
    elif show_lidarseg or show_panoptic:
        assert pointsensor['sensor_modality'] == 'lidar', 'Error: Can only render lidarseg labels for lidar, ' \
                                                          'not %s!' % pointsensor['sensor_modality']
        gt_from = 'lidarseg' if show_lidarseg else 'panoptic'
        semantic_table = getattr(self.nusc, gt_from)
        if lidarseg_preds_bin_path:
            sample_token = self.nusc.get('sample_data', pointsensor_token)['sample_token']
            lidarseg_labels_filename = lidarseg_preds_bin_path
            assert os.path.exists(lidarseg_labels_filename), \
                'Error: Unable to find {} to load the predictions for sample token {} (lidar ' \
                'sample data token {}) from.'.format(lidarseg_labels_filename, sample_token, pointsensor_token)
        else:
            if len(semantic_table) > 0:
                # Ensure {lidarseg/panoptic}.json is not empty (e.g. in case of v1.0-test).
                lidarseg_labels_filename = osp.join(self.nusc.dataroot,
                                                    self.nusc.get(gt_from, pointsensor_token)['filename'])
            else:
                lidarseg_labels_filename = None
        if lidarseg_labels_filename:
            # Paint each label in the pointcloud with a RGBA value.
            if show_lidarseg:
                coloring = paint_points_label(lidarseg_labels_filename, filter_lidarseg_labels,
                                              self.nusc.lidarseg_name2idx_mapping, self.nusc.colormap)
            else:
                coloring = paint_panop_points_label(lidarseg_labels_filename, filter_lidarseg_labels,
                                                    self.nusc.lidarseg_name2idx_mapping, self.nusc.colormap)
        else:
            coloring = depths
            print(f'Warning: There are no lidarseg labels in {self.nusc.version}. Points will be colored according '
                  f'to distance from the ego vehicle instead.')
    else:
        # Retrieve the color from the depth.
        coloring = depths

    # Take the actual picture (matrix multiplication with camera-matrix + renormalization).
    points = view_points(pc.points[:3, :], np.array(cs_record['camera_intrinsic']), normalize=True)

    # Remove points that are either outside or behind the camera. Leave a margin of 1 pixel for aesthetic reasons.
    # Also make sure points are at least 1m in front of the camera to avoid seeing the lidar points on the camera
    # casing for non-keyframes which are slightly out of sync.
    mask = np.ones(depths.shape[0], dtype=bool)
    mask = np.logical_and(mask, depths > min_dist)
    mask = np.logical_and(mask, points[0, :] > 1)
    mask = np.logical_and(mask, points[0, :] < im.size[0] - 1)
    mask = np.logical_and(mask, points[1, :] > 1)
    mask = np.logical_and(mask, points[1, :] < im.size[1] - 1)
    points = points[:, mask]
    coloring = coloring[mask]

    return points, coloring
```
lidar to image
```python
# First : LIDAR => ego vehicle frame
cs_record = nusc.get('calibrated_sensor', lidar['calibrated_sensor_token'])
pc.rotate(Quaternion(cs_record['rotation']).rotation_matrix)
pc.translate(np.array(cs_record['translation']))

# Second : ego => global
poserecord = nusc.get('ego_pose', lidar['ego_pose_token'])
pc.rotate(Quaternion(poserecord['rotation']).rotation_matrix)
pc.translate(np.array(poserecord['translation']))

# Third : global => ego (at the camera timestamp)
poserecord = nusc.get('ego_pose', cam['ego_pose_token'])
pc.translate(-np.array(poserecord['translation']))
pc.rotate(Quaternion(poserecord['rotation']).rotation_matrix.T)

# Fourth : ego => CAM
cs_record = nusc.get('calibrated_sensor', cam['calibrated_sensor_token'])
pc.translate(-np.array(cs_record['translation']))
pc.rotate(Quaternion(cs_record['rotation']).rotation_matrix.T)

points_lidar2img, coloring = point_to_img(pc)

# Init axes.
fig, ax = plt.subplots(1, 1, figsize=(9, 16))
ax.set_title(camera_channel)
ax.imshow(im)
ax.scatter(points_lidar2img[0, :], points_lidar2img[1, :], c=coloring, s=dot_size)
ax.axis('off')
```
radar to image
```python
# First step: transform the pointcloud to the ego vehicle frame for the timestamp of the sweep.
cs_record = nusc.get('calibrated_sensor', radar['calibrated_sensor_token'])
pc_r.rotate(Quaternion(cs_record['rotation']).rotation_matrix)
pc_r.translate(np.array(cs_record['translation']))

# Second step: transform from ego to the global frame.
poserecord = nusc.get('ego_pose', radar['ego_pose_token'])
pc_r.rotate(Quaternion(poserecord['rotation']).rotation_matrix)
pc_r.translate(np.array(poserecord['translation']))

# Third step: transform from global into the ego vehicle frame for the timestamp of the image.
poserecord = nusc.get('ego_pose', cam['ego_pose_token'])
pc_r.translate(-np.array(poserecord['translation']))
pc_r.rotate(Quaternion(poserecord['rotation']).rotation_matrix.T)

# Fourth step: transform from ego into the camera.
cs_record = nusc.get('calibrated_sensor', cam['calibrated_sensor_token'])
pc_r.translate(-np.array(cs_record['translation']))
pc_r.rotate(Quaternion(cs_record['rotation']).rotation_matrix.T)

points_radar2img, coloring = point_to_img(pc_r)

# Init axes.
fig, ax = plt.subplots(1, 1, figsize=(9, 16))
ax.set_title(camera_channel)
ax.imshow(im)
ax.scatter(points_radar2img[0, :], points_radar2img[1, :], c=coloring, s=dot_size)
ax.axis('off')
```
lidar to radar
```python
pc = LidarPointCloud.from_file(pcl_path)

# First : LiDAR => ego
cs_record = nusc.get('calibrated_sensor', lidar['calibrated_sensor_token'])
pc.rotate(Quaternion(cs_record['rotation']).rotation_matrix)
pc.translate(np.array(cs_record['translation']))

# Second : ego => global
poserecord = nusc.get('ego_pose', lidar['ego_pose_token'])
pc.rotate(Quaternion(poserecord['rotation']).rotation_matrix)
pc.translate(np.array(poserecord['translation']))

# Third : global => ego for RADAR
poserecord = nusc.get('ego_pose', radar['ego_pose_token'])
pc.translate(-np.array(poserecord['translation']))
pc.rotate(Quaternion(poserecord['rotation']).rotation_matrix.T)

# Fourth : ego => RADAR
cs_record = nusc.get('calibrated_sensor', radar['calibrated_sensor_token'])
pc.translate(-np.array(cs_record['translation']))
pc.rotate(Quaternion(cs_record['rotation']).rotation_matrix.T)

points_lidar2radar = pc

save_pcd("./demo_origin.pcd", points_lidar.points.T)
save_pcd("./demo.pcd", points_lidar2radar.points.T)
```
lidar2radar2image
```python
##### LIDAR_TOP => RADAR_FRONT #####
# First : LIDAR => ego vehicle frame
pc = LidarPointCloud.from_file(pcl_path)
cs_record = nusc.get('calibrated_sensor', lidar['calibrated_sensor_token'])
pc.rotate(Quaternion(cs_record['rotation']).rotation_matrix)
pc.translate(np.array(cs_record['translation']))

# Second : ego => global
poserecord = nusc.get('ego_pose', lidar['ego_pose_token'])
pc.rotate(Quaternion(poserecord['rotation']).rotation_matrix)
pc.translate(np.array(poserecord['translation']))

# Third : global => ego for RADAR
poserecord = nusc.get('ego_pose', radar['ego_pose_token'])
pc.translate(-np.array(poserecord['translation']))
pc.rotate(Quaternion(poserecord['rotation']).rotation_matrix.T)

# Fourth : ego => RADAR
cs_record = nusc.get('calibrated_sensor', radar['calibrated_sensor_token'])
pc.translate(-np.array(cs_record['translation']))
pc.rotate(Quaternion(cs_record['rotation']).rotation_matrix.T)

##### RADAR_FRONT => CAM_FRONT #####
# First : RADAR => ego vehicle frame
cs_record = nusc.get('calibrated_sensor', radar['calibrated_sensor_token'])
pc.rotate(Quaternion(cs_record['rotation']).rotation_matrix)
pc.translate(np.array(cs_record['translation']))

# Second : ego => global
poserecord = nusc.get('ego_pose', radar['ego_pose_token'])
pc.rotate(Quaternion(poserecord['rotation']).rotation_matrix)
pc.translate(np.array(poserecord['translation']))

# Third : global => ego for CAM
poserecord = nusc.get('ego_pose', cam['ego_pose_token'])
pc.translate(-np.array(poserecord['translation']))
pc.rotate(Quaternion(poserecord['rotation']).rotation_matrix.T)

# Fourth : ego => CAM
cs_record = nusc.get('calibrated_sensor', cam['calibrated_sensor_token'])
pc.translate(-np.array(cs_record['translation']))
pc.rotate(Quaternion(cs_record['rotation']).rotation_matrix.T)

points_lidar2radar2img, coloring = point_to_img(pc)

# Init axes.
fig, ax = plt.subplots(1, 1, figsize=(9, 16))
ax.set_title(camera_channel)
ax.imshow(im)
ax.scatter(points_lidar2radar2img[0, :], points_lidar2radar2img[1, :], c=coloring, s=dot_size)
ax.axis('off')
```
whyekit-motional commented 2 months ago

@HAMA-DL-dev try plotting the points (e.g. using matplotlib) before saving them to .pcd - the intention here is to eliminate any potential issue with the saving process or the viewer you used
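For example, a rough sketch of such a check (assuming the `points_lidar` and `points_lidar2radar` LidarPointCloud objects from your snippets are still in memory):

```python
import matplotlib.pyplot as plt

# Quick top-down comparison of the point cloud before and after the transform.
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 6))
ax1.scatter(points_lidar.points[0, :], points_lidar.points[1, :], s=0.5)
ax1.set_title('Original (LIDAR_TOP frame)')
ax2.scatter(points_lidar2radar.points[0, :], points_lidar2radar.points[1, :], s=0.5)
ax2.set_title('Transformed (RADAR_FRONT frame)')
for ax in (ax1, ax2):
    ax.set_aspect('equal')
    ax.set_xlabel('x [m]')
    ax.set_ylabel('y [m]')
plt.show()
```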

HAMA-DL-dev commented 2 months ago

@whyekit-motional

Thanks for the reply. :)

As you suggested, the results of visualizing with matplotlib are shown below. On the left is the original LiDAR point cloud, and on the right is the LiDAR => RADAR_FRONT point cloud. It is difficult to see any change visually, so I also printed the minimum z-value at the bottom of the figure.

[screenshot: original LiDAR (left) vs. LiDAR => RADAR_FRONT (right) point clouds, with minimum z-values printed below]

I expected the LiDAR => RADAR_FRONT point cloud to have a smaller z-value than before, but based on these results, it seems that this prediction was incorrect.

On a side note, visualizing with open3d gave the same result as checking the previously saved *.pcd file with pcl_viewer.

Below is the code I used for the visualization, in case an issue there is affecting the result.

```python
from mpl_toolkits.mplot3d import Axes3D

fig = plt.figure(figsize=(12, 6))
ax1 = fig.add_subplot(121, projection='3d')  
ax2 = fig.add_subplot(122, projection='3d')

ax1.scatter(points_lidar.points[0,:], points_lidar.points[1,:], points_lidar.points[2,:], s=0.5)
ax2.scatter(points_lidar2radar.points[0,:], points_lidar2radar.points[1,:], points_lidar2radar.points[2,:], s=0.5)

ax1.set_xlabel('X axis')
ax1.set_ylabel('Y axis')
ax1.set_zlabel('Z axis')
ax1.set_title('LiDAR point cloud')
ax2.set_xlabel('X axis')
ax2.set_ylabel('Y axis')
ax2.set_zlabel('Z axis')
ax2.set_title('LiDAR2RADAR')

plt.show()
```
whyekit-motional commented 2 months ago

@HAMA-DL-dev pls try plotting in the direction of either just the XZ-plane or the YZ-plane for clarity

HAMA-DL-dev commented 2 months ago

@whyekit-motional

Thank you again for your response. I hadn't thought of visualizing it on the XZ or YZ plane...

I checked as you advised, and the results are as follows. They look similar to what I observed when saving and inspecting the *.pcd file. I also checked the XY plane, and there the LIDAR => RADAR_FRONT result appeared normal.

[image: XZ-, YZ-, and XY-plane plots of LiDAR vs. LiDAR2RADAR]

I apologize for the inconvenience, but could there be an issue with the code I wrote for LIDAR => RADAR_FRONT? I suspect there is a fundamental problem in my code, but I cannot figure out what it is.

```python
from mpl_toolkits.mplot3d import Axes3D

fig = plt.figure(figsize=(18, 6))

# XZ-plane plot
ax1 = fig.add_subplot(131)
ax1.scatter(points_lidar.points[0,:], points_lidar.points[2,:], s=0.5)
ax1.scatter(points_lidar2radar.points[0,:], points_lidar2radar.points[2,:], s=0.5)
ax1.set_xlabel('X axis')
ax1.set_ylabel('Z axis')
ax1.set_title('XZ-plane: LiDAR vs LiDAR2RADAR')

# YZ-plane plot
ax2 = fig.add_subplot(132)
ax2.scatter(points_lidar.points[1,:], points_lidar.points[2,:], s=0.5)
ax2.scatter(points_lidar2radar.points[1,:], points_lidar2radar.points[2,:], s=0.5)
ax2.set_xlabel('Y axis')
ax2.set_ylabel('Z axis')
ax2.set_title('YZ-plane: LiDAR vs LiDAR2RADAR')

# XY-plane plot
ax3 = fig.add_subplot(133)
ax3.scatter(points_lidar.points[0,:], points_lidar.points[1,:], s=0.5)
ax3.scatter(points_lidar2radar.points[0,:], points_lidar2radar.points[1,:], s=0.5)
ax3.set_xlabel('X axis')
ax3.set_ylabel('Y axis')
ax3.set_title('XY-plane: LiDAR vs LiDAR2RADAR')

plt.show()
```
whyekit-motional commented 2 months ago

@HAMA-DL-dev I put together a code snippet for you to try:

```python
import copy
import os
from typing import Any

import matplotlib.pyplot as plt
import numpy as np
from pyquaternion import Quaternion

from nuscenes.nuscenes import NuScenes
from nuscenes.utils.data_classes import LidarPointCloud, RadarPointCloud

def transform_pc_from_sensor_a_to_sensor_b(
    nusc: NuScenes, 
    pc_to_transform: LidarPointCloud, 
    sensor_from: dict[str, Any], 
    sensor_to: dict[str, Any],
) -> LidarPointCloud:
    assert sensor_from["sample_token"] == sensor_to["sample_token"]

    pc = copy.deepcopy(pc_to_transform) 

    # First : sensor_a frame => ego frame (at timestamp of sensor_a)
    cs_record = nusc.get('calibrated_sensor', sensor_from['calibrated_sensor_token'])
    pc.rotate(Quaternion(cs_record['rotation']).rotation_matrix)
    pc.translate(np.array(cs_record['translation']))

    # Second : ego frame (at timestamp of sensor_a) => global frame 
    poserecord = nusc.get('ego_pose', sensor_from['ego_pose_token'])
    pc.rotate(Quaternion(poserecord['rotation']).rotation_matrix)
    pc.translate(np.array(poserecord['translation']))

    # Third : global frame => ego frame (at timestamp of sensor_b)
    poserecord = nusc.get('ego_pose', sensor_to['ego_pose_token'])
    pc.translate(-np.array(poserecord['translation']))
    pc.rotate(Quaternion(poserecord['rotation']).rotation_matrix.T)

    # Fourth : ego frame (at timestamp of sensor_b) => sensor_b frame
    cs_record = nusc.get('calibrated_sensor', sensor_to['calibrated_sensor_token'])
    pc.translate(-np.array(cs_record['translation']))
    pc.rotate(Quaternion(cs_record['rotation']).rotation_matrix.T)

    return pc

def plot_pc(pc: LidarPointCloud, sensor_frame: str) -> None:
    fig = plt.figure(figsize=(18, 6))

    # XZ-plane plot
    ax1 = fig.add_subplot(131)
    if sensor_frame == "lidar":
        mask = pc.points[1] > 0
        colors = pc.points[1,:][mask]
    elif sensor_frame == "radar":
        mask = pc.points[1] < 0
        colors = -pc.points[1,:][mask]
    else:
        raise ValueError(f"{sensor_frame} is not a recognized sensor frame.")
    ax1.scatter(pc.points[0,:][mask], pc.points[2,:][mask], s=0.5, c=colors)
    ax1.set_xlim(-50, 50)
    ax1.set_aspect('equal')
    ax1.set_xlabel('X axis')
    ax1.set_ylabel('Z axis')
    ax1.set_title('XZ-plane')

    # YZ-plane plot
    ax2 = fig.add_subplot(132)
    mask = pc.points[0] > 0
    ax2.scatter(
        pc.points[1,:][mask], pc.points[2,:][mask], s=0.5, c=pc.points[0,:][mask]
    )
    ax2.set_xlim(-50, 50)
    ax2.set_aspect('equal')
    ax2.set_xlabel('Y axis')
    ax2.set_ylabel('Z axis')
    ax2.set_title('YZ-plane')

    # XY-plane plot
    ax3 = fig.add_subplot(133)
    ax3.scatter(pc.points[0,:], pc.points[1,:], s=0.5, c=pc.points[2,:])
    ax3.set_xlim(-50, 50)
    ax3.set_ylim(-50, 50)
    ax3.set_aspect('equal')
    ax3.set_xlabel('X axis')
    ax3.set_ylabel('Y axis')
    ax3.set_title('XY-plane')

    plt.show()

nusc_ = NuScenes(version='v1.0-mini', dataroot='/data/sets/nuscenes', verbose=False)

my_sample = nusc_.sample[120]

lidar_token = my_sample['data']['LIDAR_TOP']
lidar = nusc_.get('sample_data', lidar_token)

pcl_path = os.path.join(nusc_.dataroot, lidar['filename'])
pc_before = LidarPointCloud.from_file(pcl_path)

radar_token = my_sample['data']['RADAR_FRONT']
radar = nusc_.get('sample_data', radar_token)

pc_after = transform_pc_from_sensor_a_to_sensor_b(nusc=nusc_, pc_to_transform=pc_before, sensor_from=lidar, sensor_to=radar)

print("Lidar point cloud in lidar frame")
plot_pc(pc_before, sensor_frame="lidar")

print("Lidar point cloud in radar frame")
plot_pc(pc_after, sensor_frame="radar")
```

The above would give the following, which seems reasonable to me:

Lidar point cloud in lidar frame

[image: lidar point cloud plotted in the lidar frame (XZ, YZ, XY views)]

Lidar point cloud in radar frame

[image: lidar point cloud plotted in the radar frame (XZ, YZ, XY views)]

(Pls give it a check before using)
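If it helps, here is an additional round-trip sanity check (a sketch reusing the helper and variables from the snippet above): transforming lidar => radar and then radar => lidar should give back the original points, up to float32 round-off from the global-frame step.

```python
# Round-trip check: LIDAR_TOP -> RADAR_FRONT -> LIDAR_TOP.
pc_roundtrip = transform_pc_from_sensor_a_to_sensor_b(
    nusc=nusc_, pc_to_transform=pc_after, sensor_from=radar, sensor_to=lidar
)
# The maximum deviation from the original points should be small (sub-centimetre).
print(np.abs(pc_roundtrip.points[:3] - pc_before.points[:3]).max())
```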

HAMA-DL-dev commented 2 months ago

Thanks to your help, I now understand what went wrong.

Lastly, I have one final question:

I expected that the point cloud converted to the LIDAR => RADAR_FRONT frame would be positioned further forward and have a lower z-value than before, but this was not the case. Do you know why?

[image: point cloud before and after the LIDAR => RADAR_FRONT transformation]

More specifically, referring to the image below, it seems that only the rotation from the calibration, and not the translation, is reflected in the result above.

[image]

whyekit-motional commented 2 months ago

> I expected that the point cloud converted to the LIDAR => RADAR_FRONT frame would be positioned further forward and have a lower z-value compared to before

Say you have a point on the road in front of the ego. That point relative to the lidar would be, say, -1.5 meters in the z-direction. However, that same point relative to the radar would be, say, -0.5 meters in the z-direction (since the radar is positioned vertically lower than the lidar).
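To make the numbers concrete, one could read the mounting heights straight from the calibration (a sketch reusing `nusc_`, `lidar` and `radar` from the snippet above; the `calibrated_sensor` translation is the sensor's position in the ego vehicle frame, in metres with z up):

```python
lidar_cs = nusc_.get('calibrated_sensor', lidar['calibrated_sensor_token'])
radar_cs = nusc_.get('calibrated_sensor', radar['calibrated_sensor_token'])
# The ego frame origin is approximately at road level, so a point on the road
# sits at roughly -z_mount in each sensor's own frame.
print(f"Road point is ~{-lidar_cs['translation'][2]:.2f} m below LIDAR_TOP and "
      f"~{-radar_cs['translation'][2]:.2f} m below RADAR_FRONT.")
```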

HAMA-DL-dev commented 2 months ago

@whyekit-motional

Understood. This has been a great help for the work I need to do moving forward.

It seems that the issues I asked about have been completely resolved.

I sincerely thank you for your prompt and thorough responses every time :bowing_woman:

whyekit-motional commented 2 months ago

Closing this issue since it has been resolved