PRBonn / semantic_suma

SuMa++: Efficient LiDAR-based Semantic SLAM (Chen et al., IROS 2019)
MIT License

question about data type #22

Closed: Juuustin closed this issue 4 years ago

Juuustin commented 4 years ago

Hi, I'd like to try my own LiDAR point cloud with this algorithm. What data format is needed for testing?

Chen-Xieyuanli commented 4 years ago

Hey @Juuustin, we have currently only tested SuMa++ with the KITTI Odometry dataset. You can find more information about its data format on the original KITTI website.

For testing your own dataset, the semantic inference may not work very well, since the sensor model and the environments in which you recorded the laser data are probably quite different from those of KITTI. You may need to first train or fine-tune a new model to correctly infer the semantic information; you can find more details in the RangeNet++ project.

Hope this helps.

Juuustin commented 4 years ago

Thank you for your reply. I had a look at the KITTI Odometry dataset, and there are at least four downloads: the odometry data set (grayscale, 22 GB), the odometry data set (color, 65 GB), the odometry data set (Velodyne laser data, 80 GB), and the odometry data set (calibration files, 1 MB). I am wondering which of these is used in this algorithm, and whether it is possible to use the laser data only. Also, what type of input is needed? For example, do we need to transform it into a ROS bag?

Chen-Xieyuanli commented 4 years ago

SuMa++ is a LiDAR-only SLAM method, so you only need the Velodyne laser data to run SuMa++. If you later want to evaluate the odometry results, you also need to download the calibration files and the ground-truth poses.
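For reference, the Velodyne scans in the KITTI Odometry dataset are plain binary files, one per scan, each storing an N x 4 array of float32 values (x, y, z, reflectance). Below is a minimal Python sketch for inspecting one scan and for writing your own point cloud in the same layout; the paths and the placeholder cloud are illustrative assumptions only, not part of SuMa++.

```python
import numpy as np

# Load one KITTI Odometry Velodyne scan (example path, adjust to your setup).
# Each .bin file is a flat float32 buffer that reshapes into N x 4: x, y, z, reflectance.
scan_path = "dataset/sequences/00/velodyne/000000.bin"
scan = np.fromfile(scan_path, dtype=np.float32).reshape(-1, 4)

points = scan[:, :3]     # Cartesian coordinates in the sensor frame (metres)
remission = scan[:, 3]   # reflectance / intensity channel

print(points.shape, remission.shape)

# To test your own LiDAR data, you could write each cloud in the same layout
# (a placeholder cloud is used here; substitute your real points and intensities).
my_cloud = np.zeros((1000, 4), dtype=np.float32)
my_cloud.tofile("my_sequence/velodyne/000000.bin")
```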

Juuustin commented 4 years ago

Sounds great, I will give it a try, thank you very much!

Chen-Xieyuanli commented 4 years ago

Thank you for using our code. I would appreciate it a lot if you could star and fork this repo, or cite the related paper if possible.

Juuustin commented 4 years ago

No problem, I have starred it, and I will cite it if it runs on my computer.

Chen-Xieyuanli commented 4 years ago

Thanks! If you run into any problems, please feel free to drop me a message.