@HAMA-DL-dev try plotting the points (e.g. using matplotlib) before saving them to .pcd
- the intention here is to eliminate any potential issue with the saving process or the viewer you used
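Something along these lines would do (just a sketch; pc here stands for your transformed LidarPointCloud, and save_pcd for your own saving helper):
import matplotlib.pyplot as plt
# Sketch: scatter the in-memory points first, so any anomaly can be attributed
# to the transform itself rather than to the saving step or the viewer.
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(pc.points[0, :], pc.points[1, :], pc.points[2, :], s=0.5)
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
plt.show()
# Only once the in-memory points look right, write them out, e.g.:
# save_pcd(pc, 'lidar_in_radar_frame.pcd')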
@whyekit-motional
Thanks for the reply. :)
As you mentioned, the results of the matplotlib visualization are as follows. On the left is the original LiDAR point cloud, and on the right is the LiDAR => RADAR_FRONT point cloud. It’s difficult to visually determine any changes, so I printed the smallest value on the z-axis at the bottom of the image.
I expected the LiDAR => RADAR_FRONT point cloud to have a smaller z-value than before, but based on these results, it seems that this prediction was incorrect.
On a side note, visualizing the saved .pcd file with open3d gave the same result as checking it with pcl_viewer.
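(For reference, such an open3d check can be as short as the following sketch; this is not my exact code, and it assumes points_lidar2radar is the transformed LidarPointCloud:)
import numpy as np
import open3d as o3d
# Wrap the (3, N) xyz block of the LidarPointCloud in an open3d point cloud and show it.
pcd = o3d.geometry.PointCloud()
xyz = points_lidar2radar.points[:3, :].T.astype(np.float64)  # open3d expects (N, 3) float64
pcd.points = o3d.utility.Vector3dVector(xyz)
o3d.visualization.draw_geometries([pcd])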
Below is the matplotlib code I used for the visualization. I'm concerned there might be an issue in this code that affects the visualization.
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # registers the 3D projection for older matplotlib
fig = plt.figure(figsize=(12, 6))
ax1 = fig.add_subplot(121, projection='3d')
ax2 = fig.add_subplot(122, projection='3d')
ax1.scatter(points_lidar.points[0,:], points_lidar.points[1,:], points_lidar.points[2,:], s=0.5)
ax2.scatter(points_lidar2radar.points[0,:], points_lidar2radar.points[1,:], points_lidar2radar.points[2,:], s=0.5)
ax1.set_xlabel('X axis')
ax1.set_ylabel('Y axis')
ax1.set_zlabel('Z axis')
ax1.set_title('LiDAR point cloud')
ax2.set_xlabel('X axis')
ax2.set_ylabel('Y axis')
ax2.set_zlabel('Z axis')
ax2.set_title('LiDAR2RADAR')
plt.show()
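(For reference, the minimum z-values mentioned above can be printed directly from the same arrays, e.g.:)
# Compare the smallest z-values of the two clouds numerically instead of by eye.
print("min z, LiDAR frame:      ", points_lidar.points[2, :].min())
print("min z, RADAR_FRONT frame:", points_lidar2radar.points[2, :].min())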
@HAMA-DL-dev pls try plotting in the direction of either just the XZ-plane or the YZ-plane for clarity
@whyekit-motional
Thank you for your response again. I hadn't even thought about visualizing it on the XZ or YZ plane...
I checked it as you advised, and the results are as follows. But it appears similar to what I observed when saving and checking the *.pcd file. I also checked it on the XY plane, and in this case, LIDAR => RADAR_FRONT appeared normal.
I apologize for the inconvenience, but could there be any issues with the code I wrote for LIDAR => RADAR_FRONT? I suspect there is a fundamental problem with my code, but I can't figure out what it is.
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(18, 6))
# XZ-plane plot
ax1 = fig.add_subplot(131)
ax1.scatter(points_lidar.points[0,:], points_lidar.points[2,:], s=0.5)
ax1.scatter(points_lidar2radar.points[0,:], points_lidar2radar.points[2,:], s=0.5)
ax1.set_xlabel('X axis')
ax1.set_ylabel('Z axis')
ax1.set_title('XZ-plane: LiDAR vs LiDAR2RADAR')
# YZ-plane plot
ax2 = fig.add_subplot(132)
ax2.scatter(points_lidar.points[1,:], points_lidar.points[2,:], s=0.5)
ax2.scatter(points_lidar2radar.points[1,:], points_lidar2radar.points[2,:], s=0.5)
ax2.set_xlabel('Y axis')
ax2.set_ylabel('Z axis')
ax2.set_title('YZ-plane: LiDAR vs LiDAR2RADAR')
# XY-plane plot
ax3 = fig.add_subplot(133)
ax3.scatter(points_lidar.points[0,:], points_lidar.points[1,:], s=0.5)
ax3.scatter(points_lidar2radar.points[0,:], points_lidar2radar.points[1,:], s=0.5)
ax3.set_xlabel('X axis')
ax3.set_ylabel('Y axis')
ax3.set_title('XY-plane: LiDAR vs LiDAR2RADAR')
plt.show()
@HAMA-DL-dev I put together a code snippet for you to try:
import copy
import os
from typing import Any
import matplotlib.pyplot as plt
import numpy as np
from pyquaternion import Quaternion
from nuscenes.nuscenes import NuScenes
from nuscenes.utils.data_classes import LidarPointCloud, RadarPointCloud
def transform_pc_from_sensor_a_to_sensor_b(
nusc: NuScenes,
pc_to_transform: LidarPointCloud,
sensor_from: dict[str, Any],
sensor_to: dict[str, Any],
) -> LidarPointCloud:
assert sensor_from["sample_token"] == sensor_to["sample_token"]
pc = copy.deepcopy(pc_to_transform)
# First : sensor_a frame => ego frame (at timestamp of sensor_a)
cs_record = nusc.get('calibrated_sensor', sensor_from['calibrated_sensor_token'])
pc.rotate(Quaternion(cs_record['rotation']).rotation_matrix)
pc.translate(np.array(cs_record['translation']))
# Second : ego frame (at timestamp of sensor_a) => global frame
poserecord = nusc.get('ego_pose', sensor_from['ego_pose_token'])
pc.rotate(Quaternion(poserecord['rotation']).rotation_matrix)
pc.translate(np.array(poserecord['translation']))
# Third : global frame => ego frame (at timestamp of sensor_b)
poserecord = nusc.get('ego_pose', sensor_to['ego_pose_token'])
pc.translate(-np.array(poserecord['translation']))
pc.rotate(Quaternion(poserecord['rotation']).rotation_matrix.T)
# Fourth : ego frame (at timestamp of sensor_b) => sensor_b frame
cs_record = nusc.get('calibrated_sensor', sensor_to['calibrated_sensor_token'])
pc.translate(-np.array(cs_record['translation']))
pc.rotate(Quaternion(cs_record['rotation']).rotation_matrix.T)
return pc
def plot_pc(pc: LidarPointCloud, sensor_frame: str) -> None:
fig = plt.figure(figsize=(18, 6))
# XZ-plane plot
ax1 = fig.add_subplot(131)
if sensor_frame == "lidar":
mask = pc.points[1] > 0
colors = pc.points[1,:][mask]
elif sensor_frame == "radar":
mask = pc.points[1] < 0
colors = -pc.points[1,:][mask]
else:
        raise ValueError(f"{sensor_frame} is not a recognized sensor frame.")
ax1.scatter(pc.points[0,:][mask], pc.points[2,:][mask], s=0.5, c=colors)
ax1.set_xlim(-50, 50)
ax1.set_aspect('equal')
ax1.set_xlabel('X axis')
ax1.set_ylabel('Z axis')
ax1.set_title('XZ-plane')
# YZ-plane plot
ax2 = fig.add_subplot(132)
mask = pc.points[0] > 0
ax2.scatter(
pc.points[1,:][mask], pc.points[2,:][mask], s=0.5, c=pc.points[0,:][mask]
)
ax2.set_xlim(-50, 50)
ax2.set_aspect('equal')
ax2.set_xlabel('Y axis')
ax2.set_ylabel('Z axis')
ax2.set_title('YZ-plane')
    # XY-plane plot
ax3 = fig.add_subplot(133)
ax3.scatter(pc.points[0,:], pc.points[1,:], s=0.5, c=pc.points[2,:])
ax3.set_xlim(-50, 50)
ax3.set_ylim(-50, 50)
ax3.set_aspect('equal')
ax3.set_xlabel('X axis')
ax3.set_ylabel('Y axis')
ax3.set_title('XY-plane')
plt.show()
nusc_ = NuScenes(version='v1.0-mini', dataroot='/data/sets/nuscenes', verbose=False)
my_sample = nusc_.sample[120]
lidar_token = my_sample['data']['LIDAR_TOP']
lidar = nusc_.get('sample_data', lidar_token)
pcl_path = os.path.join(nusc_.dataroot, lidar['filename'])
pc_before = LidarPointCloud.from_file(pcl_path)
radar_token = my_sample['data']['RADAR_FRONT']
radar = nusc_.get('sample_data', radar_token)
pc_after = transform_pc_from_sensor_a_to_sensor_b(nusc=nusc_, pc_to_transform=pc_before, sensor_from=lidar, sensor_to=radar)
print("Lidar point cloud in lidar frame")
plot_pc(pc_before, sensor_frame="lidar")
print("Lidar point cloud in radar frame")
plot_pc(pc_after, sensor_frame="radar")
The above would give the following, which seems reasonable to me:
Lidar point cloud in lidar frame
Lidar point cloud in radar frame
(Pls give it a check before using)
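If you want an additional sanity check on the chained transform, one option (not part of the snippet above, just a sketch) is to compose the same four steps into a single 4x4 matrix using transform_matrix from nuscenes.utils.geometry_utils and confirm it produces the same points as transform_pc_from_sensor_a_to_sensor_b, up to floating-point error:
import numpy as np
from pyquaternion import Quaternion
from nuscenes.utils.geometry_utils import transform_matrix
# Build lidar->global and global->radar as homogeneous matrices, chain them,
# and compare the result with the step-by-step transform above.
cs_lidar = nusc_.get('calibrated_sensor', lidar['calibrated_sensor_token'])
pose_lidar = nusc_.get('ego_pose', lidar['ego_pose_token'])
cs_radar = nusc_.get('calibrated_sensor', radar['calibrated_sensor_token'])
pose_radar = nusc_.get('ego_pose', radar['ego_pose_token'])
lidar_to_global = (
    transform_matrix(pose_lidar['translation'], Quaternion(pose_lidar['rotation']))
    @ transform_matrix(cs_lidar['translation'], Quaternion(cs_lidar['rotation']))
)
global_to_radar = (
    transform_matrix(cs_radar['translation'], Quaternion(cs_radar['rotation']), inverse=True)
    @ transform_matrix(pose_radar['translation'], Quaternion(pose_radar['rotation']), inverse=True)
)
lidar_to_radar = global_to_radar @ lidar_to_global
xyz1 = np.vstack((pc_before.points[:3, :], np.ones(pc_before.nbr_points())))
xyz_in_radar = (lidar_to_radar @ xyz1)[:3, :]
print("max abs difference vs. pc_after:", np.abs(xyz_in_radar - pc_after.points[:3, :]).max())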
Thanks to your help, I can now understand what went wrong.
Lastly, I have a final question:
I expected that the point cloud converted to the LIDAR => RADAR_FRONT frame would be positioned further forward and have a lower z-value compared to before, but this was not the case. Do you know why?
More specifically, referring to the image below, it seems that only the rotation from the calibration values, and not the translation, is reflected in the result above.
> I expected that the point cloud converted to the LIDAR => RADAR_FRONT frame would be positioned further forward and have a lower z-value compared to before
Say you have a point on the road in front of the ego. That point relative to the lidar would be, say, -1.5 meters in the z-direction. However, that same point relative to the radar would be, say, -0.5 meters in the z-direction (since the radar is positioned vertically lower than the lidar).
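You can see this concretely from the calibrated_sensor records: the z-component of each sensor's translation is (roughly) its mounting height in the ego frame. A quick sketch, reusing nusc_, lidar and radar from the snippet above:
# The z-component of each calibrated_sensor translation is the sensor's
# mounting height above the ego frame origin (approximately ground level).
cs_lidar = nusc_.get('calibrated_sensor', lidar['calibrated_sensor_token'])
cs_radar = nusc_.get('calibrated_sensor', radar['calibrated_sensor_token'])
z_lidar = cs_lidar['translation'][2]
z_radar = cs_radar['translation'][2]
print(f"lidar mount height: {z_lidar:.2f} m, radar mount height: {z_radar:.2f} m")
# A ground point at roughly z = -z_lidar in the lidar frame ends up at roughly
# z = -z_radar in the radar frame (ignoring small mounting tilts), i.e. ground
# points move up by about z_lidar - z_radar when expressed in the radar frame.
print(f"expected upward shift of ground points: about {z_lidar - z_radar:.2f} m")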
@whyekit-motional
Understood. This has been a great help for the work I need to do moving forward.
It seems that the issues I asked about have been completely resolved.
I sincerely thank you for your prompt and thorough responses every time :bowing_woman:
Closing this issue since it has been resolved
Hello. Recently, I asked a question about the transformation from LiDAR to RADAR. Following your advice, I successfully performed the LiDAR to image and RADAR_FRONT to image transformations with the reference you provided from other issues.
However, when I tried to transform from LiDAR to RADAR_FRONT using the same method, I found that, when viewed in the *.pcd file, the LiDAR point cloud appeared only to be inverted vertically. (The top of the image shows the transformed LiDAR point cloud, while the bottom shows the original.) I expect that when transforming from LiDAR to RADAR_FRONT, the point cloud will be positioned further forward and lower than before; I would also like to know if this assumption is incorrect.
I also attempted the LIDAR => RADAR_FRONT => image transformation, but this resulted in the same outcome as the LiDAR to image transformation. I am unable to determine what went wrong, so I am raising the issue again. In the toggle sections below, I have included the code I used for each transformation.
Lastly, I would like to reiterate that the reason for performing the transformation from LiDAR to RADAR is to train on the point cloud under the assumption that the LiDAR is located at the RADAR_FRONT. Thank you.
- initialization and functions (save_pcd, point_to_img)
- lidar to image
- radar to image
- lidar to radar
- lidar2radar2image