Zhimin Yuan, Wankang Zeng, Yanfei Su, Weiquan Liu, Ming Cheng, Yulan Guo, Cheng Wang
ASC, Xiamen University
This is the official code release of DGT-ST. The code is mainly based on MinkowskiEngine.
We tried to clean the code to make it more readable. However, the cleaned code of PCAN could not reproduce the results reported in our paper, possibly due to the instability of GAN training. We are now working on a new direction to obtain a warm-up model that is more stable than GAN-based methods, so we will not spend much more effort on this. The code in this repo is only lightly cleaned and still messy.
The code was submitted in a hurry. Please contact me if you have any questions.
We published a new repo focusing on self-training-based methods for UDA segmentation of outdoor LiDAR point clouds in driving scenarios.
conda create -n py3-mink python=3.8
conda activate py3-mink
conda install openblas-devel -c anaconda
# pytorch
pip install torch==1.10.1+cu111 torchvision==0.11.2+cu111 torchaudio==0.10.1 -f https://download.pytorch.org/whl/cu111/torch_stable.html
# MinkowskiEngine==0.5.4
pip install -U git+https://github.com/NVIDIA/MinkowskiEngine -v --no-deps --install-option="--blas_include_dirs=${CONDA_PREFIX}/include" --install-option="--blas=openblas"
pip install tensorboard
pip install setuptools==52.0.0
pip install six
pip install pyyaml
pip install easydict
pip install gitpython
pip install wandb
pip install torch-scatter -f https://data.pyg.org/whl/torch-1.10.1+cu111.html
pip install tqdm
pip install pandas
pip install scikit-learn
pip install opencv-python
# install other packages if needed
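After installation, a quick sanity check (a minimal sketch of ours, not part of the repo) can confirm that the key packages from the steps above are importable:

```python
import importlib.util

def check_packages(pkgs):
    """Return a dict mapping package name -> importable (bool)."""
    return {p: importlib.util.find_spec(p) is not None for p in pkgs}

if __name__ == "__main__":
    # Packages installed by the commands above.
    report = check_packages(
        ["torch", "MinkowskiEngine", "torch_scatter", "yaml", "easydict"]
    )
    for pkg, ok in report.items():
        print(f"{pkg}: {'OK' if ok else 'MISSING'}")
```

If `MinkowskiEngine` reports `MISSING`, double-check that openblas-devel was installed before building it.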
Our released implementation is tested on:
Please refer to DGT.md for the details.
Train the source-only model (SourceOnly_DGTST) or directly download the pretrained model and organize the downloaded files as follows:
DGT_ST
├── preTraModel
│   ├── Syn2SK
│   │   ├── SourceOnly
│   │   ├── stage_1_PCAN
│   │   └── stage_2_SAC_LM
│   └── Syn2Sp
│       ├── SourceOnly
│       ├── stage_1_PCAN
│       └── stage_2_SAC_LM
├── change_data
├── configs
Please set PRETRAIN_PATH carefully in the config file.
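The exact key layout depends on the YAML files under configs/; as a hypothetical illustration (the nesting and key names here are assumptions, check your actual config), the entry to edit looks like:

```yaml
# Hypothetical excerpt of a config under configs/ (key names are assumed):
TRAIN:
  PRETRAIN_PATH: preTraModel/Syn2SK/SourceOnly/checkpoint_val_Sp.tar
```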
SynLiDAR -> SemanticKITTI:
We use checkpoint_val_Sp.tar as the pre-trained model.
Note: The GAN training is unstable. If you cannot reproduce the results in our paper, please re-train this model.
python train_DGT_ST.py --cfg=configs/SynLiDAR2SemanticKITTI/stage_1_PCAN.yaml
We use checkpoint_val_target_Sp.tar, the model pretrained by PCAN, as the pre-trained model.
python train_DGT_ST.py --cfg=configs/SynLiDAR2SemanticKITTI/stage_2_SAC_LM.yaml
To be consistent with CoSMix, we use checkpoint_epoch_10.tar as the pre-trained model.
python train_DGT_ST.py --cfg=configs/SynLiDAR2SemanticKITTI/LaserMix.yaml
python train_DGT_ST.py --cfg=configs/SynLiDAR2SemanticKITTI/ADVENT.yaml
For example, test on SemanticKITTI or SemanticPOSS using the corresponding config.
We release the checkpoints of SynLiDAR -> SemanticKITTI and SynLiDAR -> SemanticPOSS. You can directly use the provided models for testing; you will get the same results as in Tab. 1 and Tab. 2 of our paper.
Thanks to the following works for their awesome codebases.
Although LaserMix was proposed for semi-supervised segmentation tasks, we found it very effective in 3D UDA segmentation tasks. The experimental results show that it is very stable during training. We speculate that the spatial prior helps bridge the domain gap in the UDA task.
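The spatial-prior idea behind LaserMix can be sketched in a few lines: partition two scans into inclination (pitch) bands and take alternate bands from each scan. The NumPy sketch below uses our own simplifications (uniform angle bins, labels mixed the same way as points); it is an illustration of the idea, not the authors' implementation:

```python
import numpy as np

def lasermix(points_a, labels_a, points_b, labels_b, n_bands=6,
             pitch_range=(-25.0, 3.0)):
    """Mix two LiDAR scans by interleaving inclination bands.

    points_*: (N, 3) xyz arrays; labels_*: (N,) per-point labels.
    Returns one mixed scan that takes even bands from A and odd bands from B.
    """
    def band_index(points):
        # Pitch angle in degrees, binned uniformly over pitch_range.
        pitch = np.degrees(np.arctan2(points[:, 2],
                                      np.linalg.norm(points[:, :2], axis=1)))
        lo, hi = pitch_range
        idx = np.floor((pitch - lo) / (hi - lo) * n_bands).astype(int)
        return np.clip(idx, 0, n_bands - 1)

    keep_a = band_index(points_a) % 2 == 0   # even bands from scan A
    keep_b = band_index(points_b) % 2 == 1   # odd bands from scan B
    mixed_pts = np.concatenate([points_a[keep_a], points_b[keep_b]])
    mixed_lbl = np.concatenate([labels_a[keep_a], labels_b[keep_b]])
    return mixed_pts, mixed_lbl
```

Because each band keeps the vertical structure of a real scan, the mixed scan stays plausible, which may explain the training stability we observed.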
We use the method proposed by LiDAR Distillation (code) to obtain the beam labels.
@inproceedings{DGTST,
  title={Density-guided Translator Boosts Synthetic-to-Real Unsupervised Domain Adaptive Segmentation of 3D Point Clouds},
  author={Yuan, Zhimin and Zeng, Wankang and Su, Yanfei and Liu, Weiquan and Cheng, Ming and Guo, Yulan and Wang, Cheng},
  booktitle={CVPR},
  year={2024}
}