This repo is the implementation of "Self-Training Guided Disentangled Adaptation for Cross-Domain Remote Sensing Image Semantic Segmentation". It is built on MMSegmentation and MMGeneration; many thanks to SenseTime for these two excellent repos.
We select Potsdam, Vaihingen and LoveDA as benchmark datasets and provide train, val and test lists for researchers to follow.
In the following, we provide detailed commands for dataset preparation.
Potsdam
Move '3_Ortho_IRRG.zip' and '5_Labels_all_noBoundary.zip' to the Potsdam_IRRG folder.
Move '2_Ortho_RGB.zip' and a copy of '5_Labels_all_noBoundary.zip' to the Potsdam_RGB folder.
python tools/convert_datasets/potsdam.py yourpath/ST-DASegNet/data/Potsdam_IRRG/ --clip_size 512 --stride_size 512
python tools/convert_datasets/potsdam.py yourpath/ST-DASegNet/data/Potsdam_RGB/ --clip_size 512 --stride_size 512
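Putting the steps together, a minimal end-to-end sketch (the download location ~/Downloads and the ./data root are assumptions; adjust to your setup):
# stage the ISPRS zips, then clip into non-overlapping 512x512 patches
mkdir -p ./data/Potsdam_IRRG ./data/Potsdam_RGB
cp ~/Downloads/3_Ortho_IRRG.zip ~/Downloads/5_Labels_all_noBoundary.zip ./data/Potsdam_IRRG/
cp ~/Downloads/2_Ortho_RGB.zip ~/Downloads/5_Labels_all_noBoundary.zip ./data/Potsdam_RGB/
python tools/convert_datasets/potsdam.py ./data/Potsdam_IRRG/ --clip_size 512 --stride_size 512
python tools/convert_datasets/potsdam.py ./data/Potsdam_RGB/ --clip_size 512 --stride_size 512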
Vaihingen
Move 'ISPRS_semantic_labeling_Vaihingen.zip' and 'ISPRS_semantic_labeling_Vaihingen_ground_truth_eroded_COMPLETE.zip' to the Vaihingen_IRRG folder.
python tools/convert_datasets/vaihingen.py yourpath/ST-DASegNet/data/Vaihingen_IRRG/ --clip_size 512 --stride_size 256
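Analogously for Vaihingen (download location assumed; note that the 256 stride is half the clip size, so the converter produces overlapping 512x512 patches):
# stage the zips, then clip with a 256-pixel stride (overlapping crops)
mkdir -p ./data/Vaihingen_IRRG
cp ~/Downloads/ISPRS_semantic_labeling_Vaihingen.zip ~/Downloads/ISPRS_semantic_labeling_Vaihingen_ground_truth_eroded_COMPLETE.zip ./data/Vaihingen_IRRG/
python tools/convert_datasets/vaihingen.py ./data/Vaihingen_IRRG/ --clip_size 512 --stride_size 256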
LoveDA
Unzip Train.zip, Val.zip and Test.zip, and create Train, Val and Test lists for the Urban and Rural domains (see the sketch below).
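A minimal sketch of the unzip and list-generation steps, assuming the archives extract to the official <split>/<domain>/images_png layout; the list file names here are assumptions, so match them to what the configs expect:
cd ./data/LoveDA
unzip -q Train.zip && unzip -q Val.zip && unzip -q Test.zip
# write one image filename per line, per split and domain
for split in Train Val Test; do
  for domain in Urban Rural; do
    find $split/$domain/images_png -name '*.png' -exec basename {} \; | sort > ${split}_${domain}_list.txt
  done
done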
Sentinel-2
python tools/convert_datasets/Sentinel-2.py ./data/yourpath --out_dir ./data/Sentinel2
GID (GF-2)
python tools/convert_datasets/GF-2.py ./data/yourpath/GID/Large-scale_Classification_5classes/image_NirRGB --out_dir ./data/GID_G2R/ --clip_size 1024 --stride_size 1024
python tools/convert_datasets/GF-2.py ./data/yourpath/GID/Large-scale_Classification_5classes/label_5classes --out_dir ./data/GID_G2R/ --clip_size 1024 --stride_size 1024
CITY-OSM: Chicago and Paris
python tools/convert_datasets/CITY-OSM.py ./data/yourpath/CITY-OSM/paris/ --out_dir ./data/CITY_paris/ --clip_size 512 --stride_size 512
python tools/convert_datasets/CITY-OSM.py ./data/yourpath/CITY-OSM/chicago/ --out_dir ./data/CITY_chicago/ --clip_size 512 --stride_size 512
Requirements:
Python >= 3.7
PyTorch >= 1.4
CUDA >= 10.0
Prerequisites: please refer to the MMSegmentation PREREQUISITES.
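A typical environment setup satisfying these requirements might look as follows (the exact Python/PyTorch/CUDA versions and the matching mmcv-full wheel are assumptions; pick the build that matches your driver, per the MMSegmentation prerequisites):
conda create -n st-dasegnet python=3.8 -y
conda activate st-dasegnet
# PyTorch built against CUDA 10.2 (assumed; adjust to your system)
pip install torch==1.8.1+cu102 torchvision==0.9.1+cu102 -f https://download.pytorch.org/whl/torch_stable.html
# mmcv-full wheel matching the torch/CUDA combination above
pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu102/torch1.8/index.html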
cd ST-DASegNet
pip install -e .
chmod 777 ./tools/dist_train.sh
chmod 777 ./tools/dist_test.sh
For SegFormer-b5 based ST-DASegNet training, we provide the ImageNet-pretrained backbone mit_b5.pth here (Google Drive).
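After downloading, place the checkpoint where your config's pretrained field points; the ./pretrained directory below is an assumption, so check the path used in the config you train with:
mkdir -p ./pretrained
mv ~/Downloads/mit_b5.pth ./pretrained/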
We select DeepLabV3 and SegFormer-b5 as baselines. In practice we use DeepLabV3+, a more advanced variant of DeepLabV3; after evaluation, we find it differs only slightly from DeepLabV3 and offers little advantage over it.
For LoveDA results, we evaluate on the test set and submit the predictions to the online server (https://github.com/Junjue-Wang/LoveDA) (https://codalab.lisn.upsaclay.fr/competitions/424). We also provide evaluation results on the validation set.
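All training commands below use MMSegmentation's distributed launcher: the first argument is the config file and the trailing number is the GPU count, so the examples train on 2 GPUs:
./tools/dist_train.sh <CONFIG_FILE> <GPU_NUM>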
Potsdam IRRG to Vaihingen IRRG:
cd ST-DASegNet
./tools/dist_train.sh ./experiments/deeplabv3/config/ST-DASegNet_deeplabv3plus_r50-d8_4x4_512x512_40k_Potsdam2Vaihingen.py 2
./tools/dist_train.sh ./experiments/segformerb5/config/ST-DASegNet_segformerb5_769x769_40k_Potsdam2Vaihingen.py 2
Vaihingen IRRG to Potsdam IRRG:
cd ST-DASegNet
./tools/dist_train.sh ./experiments/deeplabv3/config/ST-DASegNet_deeplabv3plus_r50-d8_4x4_512x512_40k_Vaihingen2Potsdam.py 2
./tools/dist_train.sh ./experiments/segformerb5/config/ST-DASegNet_segformerb5_769x769_40k_Vaihingen2Potsdam.py 2
Potsdam RGB to Vaihingen IRRG:
cd ST-DASegNet
./tools/dist_train.sh ./experiments/deeplabv3/config/ST-DASegNet_deeplabv3plus_r50-d8_4x4_512x512_40k_PotsdamRGB2Vaihingen.py 2
./tools/dist_train.sh ./experiments/segformerb5/config/ST-DASegNet_segformerb5_769x769_40k_PotsdamRGB2Vaihingen.py 2
Vaihingen IRRG to Potsdam RGB:
cd ST-DASegNet
./tools/dist_train.sh ./experiments/deeplabv3/config/ST-DASegNet_deeplabv3plus_r50-d8_4x4_512x512_40k_Vaihingen2PotsdamRGB.py 2
./tools/dist_train.sh ./experiments/segformerb5/config/ST-DASegNet_segformerb5_769x769_40k_Vaihingen2PotsdamRGB.py 2
LoveDA Rural to Urban:
cd ST-DASegNet
./tools/dist_train.sh ./experiments/deeplabv3/config_LoveDA/ST-DASegNet_deeplabv3plus_r50-d8_4x4_512x512_40k_R2U.py 2
./tools/dist_train.sh ./experiments/segformerb5/config_LoveDA/ST-DASegNet_segformerb5_769x769_40k_R2U.py 2
LoveDA Urban to Rural:
cd ST-DASegNet
./tools/dist_train.sh ./experiments/deeplabv3/config_LoveDA/ST-DASegNet_deeplabv3plus_r50-d8_4x4_512x512_40k_U2R.py 2
./tools/dist_train.sh ./experiments/segformerb5/config_LoveDA/ST-DASegNet_segformerb5_769x769_40k_U2R.py 2
LoveDA RGB Rural to LandCoverNet Sentinel-2:
cd ST-DASegNet
./tools/dist_train.sh ./experiments/segformerb5/config_S2LoveDA/ST-DASegNet_segformerb5_769x769_40k_R2S.py 2
LoveDA RGB Rural to GID:
cd ST-DASegNet
./tools/dist_train.sh ./experiments/segformerb5/config_GF2LoveDA/ST-DASegNet_segformerb5_769x769_40k_R2G.py 2
Paris to Chicago:
cd ST-DASegNet
./tools/dist_train.sh ./experiments/segformerb5/config_Paris2Chicago/ST-DASegNet_segformerb5_769x769_40k_P2C.py 2
After training with the above commands, you will have a checkpoint that can be used to evaluate model performance.
Testing commands
cd ST-DASegNet
./tools/dist_test.sh yourpath/config.py yourpath/trainedmodel.pth 2 --eval mIoU
./tools/dist_test.sh yourpath/config.py yourpath/trainedmodel.pth 2 --eval mFscore
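Following MMSegmentation's launcher convention, dist_test.sh takes the config, the checkpoint and the GPU count, followed by the evaluation flag; --eval mIoU reports IoU-based metrics and --eval mFscore reports precision/recall/F-score:
./tools/dist_test.sh <CONFIG_FILE> <CHECKPOINT_FILE> <GPU_NUM> --eval mIoU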
Testing cases: we provide the trained checkpoints P2V_IRRG_64.33.pth and V2P_IRRG_59.65.pth (Google Drive).
cd ST-DASegNet
./tools/dist_test.sh ./experiments/segformerb5/config/ST-DASegNet_segformerb5_769x769_40k_Potsdam2Vaihingen.py ./experiments/segformerb5/ST-DASegNet_results/P2V_IRRG_64.33.pth 2 --eval mIoU
./tools/dist_test.sh ./experiments/segformerb5/config/ST-DASegNet_segformerb5_769x769_40k_Potsdam2Vaihingen.py ./experiments/segformerb5/ST-DASegNet_results/P2V_IRRG_64.33.pth 2 --eval mFscore
cd ST-DASegNet
./tools/dist_test.sh ./experiments/segformerb5/config/ST-DASegNet_segformerb5_769x769_40k_Vaihingen2Potsdam.py ./experiments/segformerb5/ST-DASegNet_results/V2P_IRRG_59.65.pth 2 --eval mIoU
./tools/dist_test.sh ./experiments/segformerb5/config/ST-DASegNet_segformerb5_769x769_40k_Vaihingen2Potsdam.py ./experiments/segformerb5/ST-DASegNet_results/V2P_IRRG_59.65.pth 2 --eval mFscore
The arXiv version of this paper is released: ST-DASegNet_arxiv. The paper has been published in JAG; please refer to Self-Training Guided Disentangled Adaptation for Cross-Domain Remote Sensing Image Semantic Segmentation.
If you have any questions, please contact me by email at lyushuchang@buaa.edu.cn.
Many thanks to MMSegmentation and MMGeneration for their excellent works.