The code of the paper "BASeg: Boundary Aware Semantic Segmentation for Autonomous Driving".
This repository is the official PyTorch implementation for semantic segmentation.
Requirements:
Clone the repository:
git clone git@github.com:YangParky/BASeg.git
Data preparation
Download the related datasets (ADE20K, Cityscapes, CamVid) and symlink their paths as shown below (alternatively, modify the relevant paths specified in the config folder):
To speed up training, prepare the boundary ground truth in advance from here.
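If you want to generate the boundary ground truth yourself, a minimal sketch is below. It marks a pixel as boundary whenever a different class ID appears within a small radius (a common recipe; the helper name, radius, and ignore index are assumptions, and the authors' exact preprocessing may differ):

```python
import numpy as np
from scipy import ndimage

def label_to_boundary(label, radius=1, ignore_index=255):
    """Turn a per-pixel class-ID map into a binary boundary mask.

    A pixel is a boundary pixel if any neighbour within `radius`
    carries a different class ID. Sketch only -- the repository's
    own boundary ground truth may be produced differently.
    """
    mask = np.zeros(label.shape, dtype=np.uint8)
    valid = label != ignore_index
    # Max and min filters disagree exactly where the class changes
    # inside the local window.
    lmax = ndimage.maximum_filter(label, size=2 * radius + 1)
    lmin = ndimage.minimum_filter(label, size=2 * radius + 1)
    mask[(lmax != lmin) & valid] = 1
    return mask
```

Running this once per annotation file and saving the masks under the bound/ folders avoids recomputing boundaries inside the training loop.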
The directory structure follows the standard torchvision layout:
/Dataset/
    ADE20K/
        Scene-Parsing/
            ADEChallengeData2016/
                images/
                bound/
                annotations/
    Cityscapes/
        bound/
        gtFine/
        leftImg8bit/
    CamVid/
        bound/
        CamVid_Label/
        CamVid_RGB/
/Model/
/Project/
    BASeg/
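Creating the symlinks can be scripted, for example as below. The source and destination paths are placeholders; point them at wherever your datasets and checkout actually live:

```python
import os

def link_dataset(src: str, dst: str) -> None:
    """Symlink an existing dataset folder into the expected layout.

    Creates parent directories as needed and skips the link if
    something already exists at `dst`.
    """
    os.makedirs(os.path.dirname(dst), exist_ok=True)
    if not os.path.islink(dst) and not os.path.exists(dst):
        os.symlink(src, dst)

# Example (hypothetical paths -- adjust to your machine):
# link_dataset("/data/cityscapes", "/Dataset/Cityscapes")
```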
Train:
Download the pre-trained model for weight initialization.
ADE20K:
sh tools/trainade.sh ade20k baseg101
Cityscapes:
sh tools/traincityscapes.sh cityscapes baseg101
CamVid:
sh tools/traincamvid.sh camvid baseg101
Test:
Download the trained segmentation models and put them under the folder specified in config, or modify the specified paths.
For full testing (to reproduce the listed performance):
Validation on ADE20K
sh tools/testade.sh ade20k baseg101
Test on Cityscapes
sh tools/testcityscapes.sh cityscapes baseg101
Validation on CamVid
sh tools/testcamvid.sh camvid baseg101
For boundary evaluation:
Evaluation on boundary F1-score
python util/f_boundary.py
Evaluation on interior F1-score
python util/f_interior.py
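For intuition, a boundary F1-score compares predicted and ground-truth boundary maps with a small pixel tolerance, typically implemented by dilating one map before matching against the other. The sketch below uses that common definition (function name, tolerance, and dilation-based matching are assumptions; util/f_boundary.py may implement the metric differently):

```python
import numpy as np
from scipy import ndimage

def boundary_f1(pred_b, gt_b, tol=2):
    """F1 between two binary boundary maps with a pixel tolerance.

    A predicted boundary pixel counts as a true positive if a
    ground-truth boundary pixel lies within `tol` pixels, and
    vice versa for recall. Sketch only.
    """
    struct = np.ones((2 * tol + 1, 2 * tol + 1), dtype=bool)
    gt_dilated = ndimage.binary_dilation(gt_b, structure=struct)
    pred_dilated = ndimage.binary_dilation(pred_b, structure=struct)
    precision = (pred_b & gt_dilated).sum() / max(pred_b.sum(), 1)
    recall = (gt_b & pred_dilated).sum() / max(gt_b.sum(), 1)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

The tolerance makes the metric forgiving of boundaries that are localized almost, but not exactly, on the ground-truth contour.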
If you find the code or trained models useful, please consider citing:
@article{xiao2023baseg,
title={BASeg: Boundary aware semantic segmentation for autonomous driving},
author={Xiao, Xiaoyang and Zhao, Yuqian and Zhang, Fan and Luo, Biao and Yu, Lingli and Chen, Baifan and Yang, Chunhua},
journal={Neural Networks},
volume={157},
pages={460--470},
year={2023},
publisher={Elsevier}
}
Parts of the code are adapted from semseg, by its first author.
This repository is released under the MIT License (see the LICENSE file for details).