SeSame: Simple, Easy 3D Object Detection with Point-Wise Semantics

Hayeon O, Chanuk Yang, Kunsoo Huh

Hanyang University
[24.09.20] Congratulations! The paper has been accepted to ACCV 2024!
[24.07.31] Updated the existing KITTI entries because the previous submission expired
[24.07.08] Fixed bugs
[24.03.08] All results and the model zoo have been uploaded.
[24.02.28] Results submitted to the KITTI 3D/BEV object detection benchmark under the names SeSame-point, SeSame-voxel, and SeSame-pillar
Updated spconv1.x to spconv2.x
3D object detection (KITTI test set)

model | AP_easy | AP_mod | AP_hard | config | pretrained weight | result |
---|---|---|---|---|---|---|
SeSame-point | 85.25 | 76.83 | 71.60 | pointrcnn_sem_painted.yaml | pointrcnn_epoch80.pth | log |
SeSame-voxel | 81.51 | 75.05 | 70.53 | second_sem_painted.yaml | second_epoch80.pth | log |
SeSame-pillar | 83.88 | 73.85 | 68.65 | pointpillar_sem_painted.yaml | pointpillar_epoch80.pth | log |
BEV object detection (KITTI test set)

model | AP_easy | AP_mod | AP_hard | config | pretrained weight | result |
---|---|---|---|---|---|---|
SeSame-point | 90.84 | 87.49 | 83.77 | pointrcnn_sem_painted.yaml | pointrcnn_epoch80.pth | log |
SeSame-voxel | 89.86 | 85.62 | 80.95 | second_sem_painted.yaml | second_epoch80.pth | log |
SeSame-pillar | 90.61 | 86.88 | 81.93 | pointpillar_sem_painted.yaml | pointpillar_epoch80.pth | log |
The provided environment.yaml targets CUDA 10.2. If your CUDA version differs, it is better to install the packages manually.
```bash
git clone https://github.com/HAMA-DL-dev/SeSame.git
cd SeSame
conda env create -f environment.yaml
```
Download the KITTI 3D object detection dataset (link) and organize it as follows:
```
/path/to/your/kitti
├── ImageSets
├── training
│   ├── labels_cylinder3d   # <-- segmented point clouds from 3D sem. seg.
│   ├── segmented_lidar     # <-- feature-concatenated point clouds
│   ├── velodyne            # <-- raw point clouds
│   ├── planes
│   ├── image_2
│   ├── image_3
│   ├── label_2
│   └── calib
├── kitti_infos_train.pkl
└── kitti_infos_val.pkl
```
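Before moving on, it can help to confirm the layout above is in place. The sketch below is purely illustrative (the `check_kitti_layout` helper is not part of this repo):

```python
import os

# Expected sub-directories under the KITTI root (from the tree above).
EXPECTED = [
    "ImageSets",
    "training/labels_cylinder3d",
    "training/segmented_lidar",
    "training/velodyne",
    "training/planes",
    "training/image_2",
    "training/image_3",
    "training/label_2",
    "training/calib",
]

def check_kitti_layout(root):
    """Return the expected sub-directories that are missing under `root`."""
    return [p for p in EXPECTED if not os.path.isdir(os.path.join(root, p))]

missing = check_kitti_layout("/path/to/your/kitti")
if missing:
    print("missing directories:", missing)
```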
dataset | number of samples | index info | dataset info |
---|---|---|---|
train | 3712 / 7481 | train.txt | kitti_infos_train.pkl |
val | 3769 / 7481 | val.txt | kitti_infos_val.pkl |
test | 7518 | test.txt | N/A |
For more information on the *.pkl files, see this documentation: mmdetection3d-create-kitti-datset
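As a rough illustration, each *.pkl file holds a list with one metadata dictionary per frame. The keys sketched below follow OpenPCDet's KITTI info convention and are an assumption here, not taken from this repo; inspect the generated files to see the actual schema:

```python
import os
import pickle
import tempfile

# Hypothetical per-frame info entry (schema assumed, see lead-in).
mock_info = {
    "point_cloud": {"num_features": 4, "lidar_idx": "000000"},
    "image": {"image_idx": "000000", "image_shape": [375, 1242]},
}

path = os.path.join(tempfile.mkdtemp(), "kitti_infos_train.pkl")
with open(path, "wb") as f:
    pickle.dump([mock_info], f)  # the real files hold one dict per frame

with open(path, "rb") as f:
    infos = pickle.load(f)
print(len(infos), infos[0]["point_cloud"]["lidar_idx"])  # -> 1 000000
```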
Update the following paths before running:

- semantickitti.yaml (link): set the path to the downloaded weight
- painting_cylinder3d.py (link): set the paths to your KITTI dataset and semantic-kitti config
```python
# point clouds from the KITTI 3D object detection dataset
TRAINING_PATH = "/path/to/your/SeSame/detector/data/kitti/training/velodyne/"

# semantic map of the SemanticKITTI dataset
SEMANTIC_KITTI_PATH = "/path/to/your/SeSame/detector/tools/cfgs/dataset_configs/semantic-kitti.yaml"
```
```bash
cd /path/to/your/kitti/training
mkdir segmented_lidar
mkdir labels_cylinder3d

# run 3D semantic segmentation (Cylinder3D) on the raw point clouds
cd /path/to/your/SeSame/segment/
python demo_folder.py --demo-folder /path/to/your/kitti/training/velodyne/ --save-folder /path/to/your/kitti/training/labels_cylinder3d/
```
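Assuming demo_folder.py writes SemanticKITTI-style .label files (one uint32 per point, lower 16 bits = semantic class, upper 16 bits = instance id; this format is an assumption, so verify it against your output), a quick way to inspect them:

```python
import os
import tempfile

import numpy as np

def load_semantic_labels(path):
    """Read a SemanticKITTI-style .label file: one uint32 per point;
    the lower 16 bits hold the semantic class id."""
    raw = np.fromfile(path, dtype=np.uint32)
    return raw & 0xFFFF

# round-trip demo with synthetic labels
path = os.path.join(tempfile.mkdtemp(), "demo.label")
np.array([10, 40, 44], dtype=np.uint32).tofile(path)
print(load_semantic_labels(path))  # -> [10 40 44]
```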
```bash
python pointpainting_cylinder3d.py
```
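Conceptually, the painting step extends each point's (x, y, z, intensity) with a per-point class vector from the segmentation. The `paint_points` helper below is a simplified sketch of this idea, not the actual implementation in pointpainting_cylinder3d.py:

```python
import numpy as np

def paint_points(points, labels, num_classes):
    """Append a one-hot class vector to each point.

    points: (N, 4) array of x, y, z, intensity
    labels: (N,) per-point class ids from the 3D semantic segmentation
    returns: (N, 4 + num_classes) painted points
    """
    onehot = np.eye(num_classes, dtype=points.dtype)[labels]
    return np.concatenate([points, onehot], axis=1)

pts = np.zeros((5, 4), dtype=np.float32)
lbl = np.array([0, 2, 1, 2, 0])
painted = paint_points(pts, lbl, num_classes=3)
print(painted.shape)  # -> (5, 7)
```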
```bash
cd detector/tools
python -m pcdet.datasets.kitti.sem_painted_kitti_dataset create_kitti_infos tools/cfgs/dataset_configs/semantic_painted_kitti.yaml
```
```bash
cd ~/SeSame/detector/tools
python train.py --cfg_file cfgs/kitti_models/${model.yaml} --batch_size 16 --epochs 80 --workers 16 --ckpt_save_interval 5

# example: train the pillar-based model
python train.py --cfg_file cfgs/kitti_models/pointpillar_sem_painted.yaml --batch_size 16 --epochs 80 --workers 16 --ckpt_save_interval 5
```
If you stop the training process by mistake, don't worry: you can resume training with the option --start_epoch ${epoch number}
```bash
python test.py --cfg_file ${configuration file of each model with *.yaml} --batch_size ${4,8,16} --workers 4 --ckpt ${path to *.pth file} --save_to_file

# example: evaluate the pillar-based model from the epoch-70 checkpoint
python test.py --cfg_file ../output/kitti_models/pointpillar_sem_painted/default/pointpillar_sem_painted.yaml --batch_size 16 --workers 4 --ckpt ../output/kitti_models/pointpillar_sem_painted/default/ckpt/checkpoint_epoch_70.pth --save_to_file
```
Thanks to the open-source code from Cylinder3D, PointPainting, and OpenPCDet.