haomo-ai / MotionSeg3D

[IROS 2022] Efficient Spatial-Temporal Information Fusion for LiDAR-Based 3D Moving Object Segmentation
https://npucvr.github.io/MotionSeg3D/
GNU General Public License v3.0

Training on other datasets #20

Closed Ianpengg closed 1 year ago

Ianpengg commented 1 year ago

Hi, thanks for sharing this great work!

I'm wondering if it's possible to train on other datasets besides the ones you've provided, such as the Oxford RobotCar dataset, which also provides LiDAR point cloud data. Would it be feasible to train on this dataset, and if so, what kind of information would you need from us to facilitate the training process (e.g., point-level labels, ground-truth poses, etc.)? Thanks in advance for your help!

MaxChanger commented 1 year ago

Hi @Ianpengg, thank you for your interest in our project. There's a bit of discussion here #2 and LiDAR-MOS#52, and as long as your data can be converted into the same format as KITTI Odometry (or SemanticKITTI), it can be seamlessly integrated into the framework.
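For reference, a minimal sketch of what "the same format as KITTI Odometry" means for the scans themselves: each frame is a flat little-endian float32 `.bin` file of `(N, 4)` points (x, y, z, intensity). The function names below are hypothetical helpers, not part of the MotionSeg3D codebase:

```python
import numpy as np

def save_kitti_bin(points, path):
    """Save an (N, 4) array of x, y, z, intensity as a KITTI Odometry
    style .bin scan: flat float32 values, row-major order."""
    pts = np.asarray(points, dtype=np.float32)
    assert pts.ndim == 2 and pts.shape[1] == 4, "expected (N, 4) points"
    pts.tofile(path)

def load_kitti_bin(path):
    """Load a KITTI-style .bin scan back into an (N, 4) float32 array."""
    return np.fromfile(path, dtype=np.float32).reshape(-1, 4)
```

If your source data has no intensity channel, a common workaround is to fill the fourth column with zeros; the sequence folder layout and `poses.txt`/`calib.txt` files then follow the SemanticKITTI directory convention.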

Ianpengg commented 1 year ago

Thanks for the replies~ I plan to first run inference on the dataset to see whether the pretrained model works for my purposes. After that, I will manually label the moving objects in the LiDAR data. Do you know of any efficient labeling tools or modules that could help me quickly label the entire sequence? I estimate that about 10,000 to 20,000 frames need to be labeled. Thank you.

MaxChanger commented 1 year ago

Hi, I think there are some helpful links in kitti_road_mos.md.

More specifically, we first use the auto-mos labeling method (link) to automatically generate the MOS labels for the KITTI-Road data. We then use a point labeler (link) to manually refine the labels.
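For anyone producing their own MOS labels, a sketch of the SemanticKITTI-style `.label` format may help: one uint32 per point, with the lower 16 bits holding the semantic class. The class IDs below (9 = static, 251 = moving) follow the LiDAR-MOS convention, but verify them against the repo's data config before relying on them:

```python
import numpy as np

# Assumed LiDAR-MOS class IDs; check the project's label mapping config.
STATIC_ID, MOVING_ID = 9, 251

def load_moving_mask(path):
    """Read a SemanticKITTI-style .label file and return a boolean
    per-point mask that is True for moving points."""
    labels = np.fromfile(path, dtype=np.uint32)
    sem = labels & 0xFFFF  # lower 16 bits: semantic class
    return sem == MOVING_ID

def save_mos_labels(moving_mask, path):
    """Write a per-point moving/static mask to .label format
    (instance bits in the upper 16 bits left as zero)."""
    sem = np.where(moving_mask, MOVING_ID, STATIC_ID).astype(np.uint32)
    sem.tofile(path)
```

The round trip through these two helpers preserves the moving/static decision per point, which is all the MOS task needs.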

Ianpengg commented 1 year ago

Thanks, this issue can be closed~