Closed pjckoch closed 3 years ago
Hi @pjckoch
I forget the reason for doing it that way. Those might be redundant steps, and I don't have the nuScenes dataset on my laptop to check right now. You can verify what happens if you change that part to use the camera directly.
One possible reason is that the target camera sensor might not capture at the same timestamp as the radar sensors, so we have to transform the point clouds to the world coordinate frame first. So we either transform the radar points to the LiDAR frame (LiDAR has the highest capture frequency), or we transform them separately (the 5 radar sensors) to the world coordinate frame using their own extrinsics.
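To illustrate why the timestamp mismatch matters, here is a minimal sketch in plain numpy of what motion compensation through the global frame looks like. All poses, extrinsics, and point values below are invented for illustration; this is not the repo's code or real nuScenes calibration.

```python
import numpy as np

def make_pose(yaw_deg, translation):
    """4x4 homogeneous transform from a z-axis yaw (degrees) and a
    3-vector translation. All values here are made up for illustration."""
    t = np.radians(yaw_deg)
    T = np.eye(4)
    T[:3, :3] = [[np.cos(t), -np.sin(t), 0.0],
                 [np.sin(t),  np.cos(t), 0.0],
                 [0.0,        0.0,       1.0]]
    T[:3, 3] = translation
    return T

# Hypothetical ego poses at the radar timestamp and the camera timestamp.
# The vehicle moved between the two captures, so the poses differ.
T_global_from_ego_at_radar_t = make_pose(5.0, [10.0, 2.0, 0.0])
T_global_from_ego_at_cam_t = make_pose(6.0, [11.5, 2.1, 0.0])

# Hypothetical sensor extrinsics (sensor frame -> ego frame).
T_ego_from_radar = make_pose(0.0, [2.5, 0.0, 0.5])
T_ego_from_cam = make_pose(-90.0, [1.5, 0.0, 1.6])

p_radar = np.array([20.0, 1.0, 0.0, 1.0])  # homogeneous radar point

# Compensated route: radar -> ego(t_radar) -> global -> ego(t_cam) -> camera.
p_cam_compensated = (np.linalg.inv(T_ego_from_cam)
                     @ np.linalg.inv(T_global_from_ego_at_cam_t)
                     @ T_global_from_ego_at_radar_t
                     @ T_ego_from_radar
                     @ p_radar)

# Naive route that ignores the timestamp gap (pretends both sensors
# observed the scene from the same ego pose).
p_cam_naive = (np.linalg.inv(T_ego_from_cam)
               @ T_ego_from_radar
               @ p_radar)

# The two results differ because the ego vehicle moved in between.
print(p_cam_compensated - p_cam_naive)
```

The detour through the global frame only matters when the ego poses at the two timestamps differ; if the vehicle were stationary, both routes would agree.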
Best
Hi @brade31919 ,
another question from my side. It's not a bug report; I just want to understand how you are processing the data. In your dataset generation, you aggregate multiple radar sweeps like this:
https://github.com/brade31919/radar_depth/blob/5e6e75772ff379aac65379a50d4042a7c64c869d/dataset/nuscenes_dataset.py#L709-L711
May I know why you set `ref_chan` to `"LIDAR_TOP"` here? Does it make any difference? Because in the same function, the radar points are then transformed from the lidar frame to the camera frame: https://github.com/brade31919/radar_depth/blob/5e6e75772ff379aac65379a50d4042a7c64c869d/dataset/nuscenes_dataset.py#L757-L774
You would get the same result if you used the radar as `ref_chan` and then transformed from the radar frame to the camera frame, right?

Thanks in advance and best regards,
Patrick
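For a single sweep with fixed extrinsics, the equivalence described above can be sketched with plain homogeneous transforms. The calibration values below are invented for illustration and are not taken from nuScenes or the repo:

```python
import numpy as np

def make_extrinsic(yaw_deg, translation):
    """4x4 rigid transform from a z-axis yaw (degrees) and a translation.
    Calibration values in this sketch are invented for illustration."""
    t = np.radians(yaw_deg)
    T = np.eye(4)
    T[:3, :3] = [[np.cos(t), -np.sin(t), 0.0],
                 [np.sin(t),  np.cos(t), 0.0],
                 [0.0,        0.0,       1.0]]
    T[:3, 3] = translation
    return T

# Hypothetical extrinsics (not real nuScenes calibration).
T_lidar_from_radar = make_extrinsic(30.0, [1.0, 0.2, -0.5])
T_cam_from_lidar = make_extrinsic(-90.0, [0.0, -0.3, 1.2])

points_radar = np.array([[5.0, 1.0, 0.0, 1.0],
                         [8.0, -2.0, 0.3, 1.0]]).T  # 4 x N homogeneous

# Route 1: radar -> lidar -> camera (two hops).
p_via_lidar = T_cam_from_lidar @ (T_lidar_from_radar @ points_radar)

# Route 2: radar -> camera in one hop, using the composed extrinsic.
T_cam_from_radar = T_cam_from_lidar @ T_lidar_from_radar
p_direct = T_cam_from_radar @ points_radar

# Both routes give the same camera-frame points.
assert np.allclose(p_via_lidar, p_direct)
```

The caveat is multi-sweep aggregation: when sweeps from different timestamps are merged, the reference channel's timestamp determines which ego pose the points are compensated to, so the choice of `ref_chan` can matter there even though a single static transform chain is order-independent.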