Closed — ascacdsaa closed this issue 1 month ago
Thank you for your attention!
Here is our Python script for processing the KITTI-360 dataset: KITTI-360_process_script.zip. In summary, we convert the LiDAR files into rangeview-like images to use as an additional modality. We also convert the labels into train IDs, so the number of classes is 19, consistent with the paper.
Rangeview-like images are illustrated as follows:
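Rangeview-like images of this kind are typically produced with a spherical projection of the LiDAR points. The following is a minimal sketch, not the actual script from the zip file; the image size (64×1024) and vertical field of view are illustrative assumptions, roughly matching a Velodyne HDL-64-style sensor:

```python
import numpy as np

def pointcloud_to_rangeview(points, H=64, W=1024, fov_up=3.0, fov_down=-25.0):
    """Project an (N, 3) point cloud to an (H, W) range image.

    Each point is mapped to a pixel via its azimuth (column) and
    elevation (row); the pixel stores the point's range in meters.
    fov_up/fov_down are the vertical field of view in degrees
    (illustrative values, not the script's actual parameters).
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)

    yaw = np.arctan2(y, x)  # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-8), -1.0, 1.0))

    fov_up_r = np.radians(fov_up)
    fov = fov_up_r - np.radians(fov_down)

    # Normalize angles to pixel coordinates.
    u = 0.5 * (1.0 - yaw / np.pi) * W   # column from azimuth
    v = (fov_up_r - pitch) / fov * H    # row from elevation

    u = np.clip(np.floor(u), 0, W - 1).astype(np.int32)
    v = np.clip(np.floor(v), 0, H - 1).astype(np.int32)

    # Keep the nearest point per pixel: write far points first
    # so closer points overwrite them.
    order = np.argsort(-r)
    img = np.zeros((H, W), dtype=np.float32)
    img[v[order], u[order]] = r[order]
    return img
```

Before saving as `.png`, the range channel would need to be normalized or quantized; the script may also store extra channels (e.g. intensity), which this sketch omits.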
This zip file contains two scripts and a folder with modified versions of the official KITTI-360 GitHub repository scripts. After installing the requirements from the official KITTI-360 repository:

1. Run `bash KITTI-360_pcd2rangeview.sh` to convert the point cloud files (`.pcd`) in KITTI-360 to rangeview-like images (`.png`).
2. Run `python KITTI-360_semanticID2trainID.py` to convert the labels in KITTI-360 from semantic IDs to train IDs.
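The semantic-ID to train-ID conversion can be sketched with a lookup table, assuming the Cityscapes-style label definitions that the KITTI-360 devkit ships in `labels.py` (the table below is an illustrative subset, not the full mapping used by `KITTI-360_semanticID2trainID.py`):

```python
import numpy as np

# Partial semantic-ID -> train-ID table in the Cityscapes convention.
# IDs without a train class map to 255, the "ignore" label.
ID_TO_TRAINID = {
    7: 0,    # road
    8: 1,    # sidewalk
    11: 2,   # building
    21: 8,   # vegetation
    23: 10,  # sky
    24: 11,  # person
    26: 13,  # car
}

def semantic_to_train_ids(label_img: np.ndarray) -> np.ndarray:
    """Remap a semantic-ID label image to train IDs via a lookup table."""
    lut = np.full(256, 255, dtype=np.uint8)  # default: ignore (255)
    for sem_id, train_id in ID_TO_TRAINID.items():
        lut[sem_id] = train_id
    return lut[label_img]

labels = np.array([[7, 8], [26, 0]], dtype=np.uint8)
remapped = semantic_to_train_ids(labels)  # road->0, sidewalk->1, car->13, unlabeled->255
```

A flat lookup table is preferable to per-pixel dictionary lookups here, since it remaps an entire label image in one vectorized indexing operation.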
We will provide a processed version of KITTI-360 for easy use and update our code to support the KITTI-360 dataset in a few days once we finish cleaning up our code.
Update: We have uploaded the processed KITTI-360 LiDAR files to Baidu Netdisk: https://pan.baidu.com/s/1-CEL88wM5DYOFHOVjzRRhA?pwd=ij7q
Thank you very much for your patient explanation; it has been very helpful. I look forward to the code update supporting the KITTI-360 dataset. By the way, could you share the training/test.txt files for KITTI-360? Thanks again!
We have uploaded the training/test.txt files for KITTI-360 to Baidu Netdisk: https://pan.baidu.com/s/1-CEL88wM5DYOFHOVjzRRhA?pwd=ij7q
Dear authors, your work is excellent. Regarding the RGB-L semantic segmentation on KITTI-360 mentioned in your paper, could you please share the details of the processed KITTI-360 dataset and the training setup? This would be very helpful to me. Thank you, and I look forward to your reply!