Song Wang, Jianke Zhu*, Ruixiang Zhang
This is the official implementation of Meta-RangeSeg: LiDAR Sequence Semantic Segmentation Using Multiple Feature Aggregation [Paper] [Video].
Qualitative results: prediction vs. ground truth, shown in both the perspective view and the bird's-eye view.
| Model | Task | mIoU (paper, test set) | mIoU (reprod., test set) | Results |
|---|---|---|---|---|
| Meta-RangeSeg | multiple scans semantic segmentation | 49.5 | 49.7 | valid_pred, test_pred |
| Meta-RangeSeg | single scan semantic segmentation | 61.0 | 60.3 | valid_pred, test_pred |
Please download the original SemanticKITTI dataset from the official website.
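Judging from the `-d` argument used in the commands below, the data loaders expect the standard SemanticKITTI directory layout (sequences 00–10 carry labels, 08 is the validation split, 11–21 are the test split):

```
./data/semantic_kitti/dataset/
└── sequences/
    ├── 00/
    │   ├── velodyne/   # .bin point clouds
    │   ├── labels/     # .label per-point annotations
    │   ├── calib.txt
    │   ├── poses.txt
    │   └── times.txt
    ├── ...
    └── 21/
```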
For residual image generation, we provide an online version, but the offline one is adopted for the actual training. Please refer to LiDAR-MOS for more details. Thanks for their great work!
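For reference, here is a minimal, illustrative sketch of the offline residual computation in the spirit of LiDAR-MOS: a past scan is transformed into the current frame via the pose difference, both scans are spherically projected to range images, and the normalized range difference is taken. The function names, the 64×2048 image size, and the FOV values are assumptions for a typical 64-beam sensor, not the exact code used in training:

```python
import numpy as np

def range_projection(points, H=64, W=2048, fov_up=3.0, fov_down=-25.0):
    """Spherically project a point cloud (N, 3) into a range image (H, W)."""
    fov_up, fov_down = np.radians(fov_up), np.radians(fov_down)
    fov = abs(fov_up) + abs(fov_down)
    depth = np.linalg.norm(points, axis=1)
    yaw = -np.arctan2(points[:, 1], points[:, 0])
    pitch = np.arcsin(points[:, 2] / np.maximum(depth, 1e-8))
    u = (0.5 * (yaw / np.pi + 1.0) * W).astype(np.int32) % W
    v = np.clip(((1.0 - (pitch + abs(fov_down)) / fov) * H).astype(np.int32), 0, H - 1)
    range_img = np.full((H, W), -1.0, dtype=np.float32)
    range_img[v, u] = depth  # last write wins; kept simple for illustration
    return range_img

def residual_image(cur_points, prev_points, pose_cur, pose_prev):
    """Normalized range residual between the current scan and a past scan
    transformed into the current coordinate frame (LiDAR-MOS-style)."""
    rel = np.linalg.inv(pose_cur) @ pose_prev  # previous -> current frame
    prev_h = np.hstack([prev_points, np.ones((len(prev_points), 1))])
    prev_in_cur = (rel @ prev_h.T).T[:, :3]
    cur_range = range_projection(cur_points)
    prev_range = range_projection(prev_in_cur)
    valid = (cur_range > 0) & (prev_range > 0)  # pixels observed in both scans
    res = np.zeros_like(cur_range)
    res[valid] = np.abs(cur_range[valid] - prev_range[valid]) / cur_range[valid]
    return res
```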
You can run the following commands to test the performance of Meta-RangeSeg:

```bash
cd ./train/tasks/semantic
python infer.py -d ./data/semantic_kitti/dataset -m ../../../logs
```
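Predictions follow the SemanticKITTI `.label` format: one uint32 per point, with the semantic class in the lower 16 bits and the instance id in the upper 16 bits. A quick way to inspect a prediction file (the path here is illustrative):

```python
import numpy as np

# One uint32 per point: lower 16 bits = semantic class, upper 16 bits = instance id.
pred = np.fromfile("predictions/sequences/08/predictions/000000.label", dtype=np.uint32)
semantic = pred & 0xFFFF   # per-point semantic class ids
instance = pred >> 16      # per-point instance ids (zero for stuff classes)
print(np.unique(semantic))
```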
To train the model from scratch, you can run:
```bash
CUDA_VISIBLE_DEVICES=0,1 python train.py -d ./data/semantic_kitti/dataset -ac ../../../meta_rangeseg.yml
```
This project is heavily based on SalsaNext and LiDAR-MOS. RangeDet and FIDNet are also excellent range-based models, which helped us a lot.
If you find this work useful, please consider citing:

```bibtex
@article{wang2022meta,
  title={Meta-RangeSeg: LiDAR Sequence Semantic Segmentation Using Multiple Feature Aggregation},
  author={Wang, Song and Zhu, Jianke and Zhang, Ruixiang},
  journal={IEEE Robotics and Automation Letters},
  volume={7},
  number={4},
  pages={9739--9746},
  year={2022},
  publisher={IEEE}
}
```