mcdviral / mcdviral.github.io


Recommendations for Sensors, Data Synchronization, and Coordinate System Unification in Joint Training with Camera, LiDAR, and Semantic Data Using the MCD Dataset #4

Closed Ru-Yan closed 2 weeks ago

Ru-Yan commented 1 month ago

Thank you for the work on this dataset. I read your paper "MCD: Diverse Large-Scale Multi-Campus Dataset for Robot Perception," and found several aspects of your work particularly intriguing. I am writing to seek your guidance on a few specific questions.

1. If I want to jointly use camera, LiDAR, and semantic data for training, which cameras and sensors would you recommend?

2. I noticed that the data acquisition frequencies for the camera, LiDAR, and IMU are different, making it difficult to obtain synchronized data for training. How should I handle data synchronization and training in this situation?

3. For joint training, it is necessary to unify the coordinate systems of the camera, LiDAR, and IMU. Does the MCD dataset provide relevant parameters for this purpose?

Thank you very much for taking the time to read my issue. I would greatly appreciate any advice you could offer.

brytsknguyen commented 3 weeks ago

Hi @Ru-Yan,

Sorry for the late reply

> 1. If I want to jointly use camera, LiDAR, and semantic data for training, which cameras and sensors would you recommend?

I suggest using the D455b RGB camera, which is available on both the ATV and HHS sequences. For lidar, we have annotated the Livox point clouds, and we are planning to label the Ouster point clouds in the future. Perhaps you could work on a camera-to-lidar annotation pipeline; we would be happy to feature your work on our site.

> 2. I noticed that the data acquisition frequencies for the camera, LiDAR, and IMU are different, making it difficult to obtain synchronized data for training. How should I handle data synchronization and training in this situation?

For lidar, we provide a continuous-time groundtruth (CTGT), so you can write a script to accumulate the lidar scans, then use the CTGT to deskew them and export new lidar scans that start at the same time as the camera frames. I have a tutorial on how to deskew the point cloud here.
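Something like the following (untested, just to illustrate the idea; `query_ctgt_pose` is a placeholder for however you interpolate a pose out of the CTGT, and the points are assumed to already be expressed in the body frame):

```python
import numpy as np

def deskew_to_camera_time(points, point_times, cam_time, query_ctgt_pose):
    """Re-express each lidar point at a camera timestamp.

    points: (N, 3) lidar points in the body frame
    point_times: (N,) per-point timestamps
    cam_time: timestamp of the camera frame to align with
    query_ctgt_pose: callable returning a 4x4 world-from-body transform at time t
    """
    T_w_cam = query_ctgt_pose(cam_time)        # world <- body at the camera stamp
    T_cam_w = np.linalg.inv(T_w_cam)
    out = np.empty_like(points)
    for i, (p, t) in enumerate(zip(points, point_times)):
        T_w_pt = query_ctgt_pose(t)            # world <- body when the point was captured
        p_h = np.array([p[0], p[1], p[2], 1.0])
        out[i] = (T_cam_w @ T_w_pt @ p_h)[:3]  # bring the point into the camera-stamp frame
    return out
```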

Synchronizing the IMU data should also be easy: you can simply buffer the IMU samples and export them based on the camera timestamps. I have written a ROS node to sync the IMU and camera here.
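The buffering idea is roughly this (a plain-Python sketch; the linked ROS node is the reference implementation):

```python
import bisect

def group_imu_by_camera(imu_stamps, imu_samples, cam_stamps):
    """Return one list of IMU samples per camera frame.

    imu_stamps: sorted IMU timestamps
    imu_samples: IMU messages aligned with imu_stamps
    cam_stamps: sorted camera timestamps
    """
    groups = []
    for t_prev, t_curr in zip(cam_stamps[:-1], cam_stamps[1:]):
        lo = bisect.bisect_left(imu_stamps, t_prev)   # first IMU sample at/after previous frame
        hi = bisect.bisect_left(imu_stamps, t_curr)   # first IMU sample at/after current frame
        groups.append(imu_samples[lo:hi])
    return groups
```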

> 3. For joint training, it is necessary to unify the coordinate systems of the camera, LiDAR, and IMU. Does the MCD dataset provide relevant parameters for this purpose?

Yes, it is necessary to unify the coordinate frames. The transforms from the camera, lidar, and IMU frames to a common body frame are provided in two yaml files here.
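Once you have those, unifying the frames is just composing the extrinsics. A rough sketch (the file names and yaml keys below are placeholders, not the actual field names in the MCD calibration files, so adapt them to whatever the yaml files define):

```python
import numpy as np
import yaml

def load_T(path, key):
    """Load a 4x4 homogeneous transform stored under `key` in a yaml file."""
    with open(path) as f:
        calib = yaml.safe_load(f)
    return np.array(calib[key]).reshape(4, 4)

T_body_cam = load_T("camera_calib.yaml", "T_body_cam")      # body <- camera
T_body_lidar = load_T("lidar_calib.yaml", "T_body_lidar")   # body <- lidar
T_cam_lidar = np.linalg.inv(T_body_cam) @ T_body_lidar      # camera <- lidar

def lidar_to_camera(points_lidar):
    """Map (N, 3) lidar points into the camera frame."""
    homo = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    return (T_cam_lidar @ homo.T).T[:, :3]
```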

Ru-Yan commented 2 weeks ago

Thank you so much for your detailed and insightful response!

Your response was excellent and very helpful to me. I have one more small question: Regarding the point cloud annotation data, I downloaded a file named “ntupriormap_labelled_withtreetrunks.bin” in .bin format. However, I am unable to read the annotated point cloud because I do not know the exact format of the annotated data. Could you please clarify the specific format of this file so that I can properly read the annotated point cloud?

Thank you once again for your support and guidance!!!

snakehaihai commented 2 weeks ago

CloudCompare will do the trick.

Ru-Yan commented 2 weeks ago

> CloudCompare will do the trick.

Thank you for your response! CloudCompare will do the trick. I will close the issue now.