MIT License

# HiSup: Accurate polygonal mapping of buildings in satellite imagery with hierarchical supervision

Bowen Xu*, Jiakun Xu*, Nan Xue† and Gui-Song Xia†

(* indicates equal contributions, and † indicates the corresponding authors)



## Installation

```shell
cd HiSup
conda develop .
pip install -r requirements.txt
```

Then build the extension in `hisup/csrc/lib`:

```shell
cd hisup/csrc/lib
make
```

For evaluation with boundary IoU, please install the boundary IoU API following [the installation instructions](https://github.com/bowenc0221/boundary-iou-api):

```shell
git clone git@github.com:bowenc0221/boundary-iou-api.git
cd boundary-iou-api
pip install -e .
```


## Quickstart with the pretrained model
You can run the following command for a quick start:

```shell
python scripts/demo.py --dataset crowdai --img [YOUR_IMAGE_PATH]
```

`--dataset crowdai` loads the model pretrained on the AICrowd dataset; to load the model pretrained on the Inria dataset instead, simply change `crowdai` to `inria`.

You can also run our demo using Colab: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/162nuZq9ghB4pQQ9qsC9eZZK5Wn2qtUEW?usp=sharing) 

## Training & Testing
### Data preparation
- Download `train.tar.gz` and `val.tar.gz` from the [AICrowd dataset](https://www.aicrowd.com/challenges/mapping-challenge-old)
- Download the [Inria dataset](https://project.inria.fr/aerialimagelabeling/) and put the data in the `inria/raw` directory
- Run `inria_to_coco.py` from the `tools` directory to generate COCO-format training data for the Inria dataset. After generation, the training data should be in the `inria/train` directory.
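For context on what a COCO-format conversion like `inria_to_coco.py` produces, here is a minimal sketch of building one COCO-style annotation record from a footprint polygon. The field names follow the standard COCO annotation format; the helper name and the category id are illustrative assumptions, not the repo's actual code:

```python
def polygon_to_coco_annotation(polygon, image_id, ann_id, category_id=100):
    """Build a COCO-style annotation dict from a building polygon.

    polygon: flat list [x1, y1, x2, y2, ...] tracing the footprint.
    """
    xs, ys = polygon[0::2], polygon[1::2]
    x_min, y_min = min(xs), min(ys)
    w, h = max(xs) - x_min, max(ys) - y_min
    # Shoelace formula for the polygon area.
    n = len(xs)
    area = 0.5 * abs(sum(xs[i] * ys[(i + 1) % n] - xs[(i + 1) % n] * ys[i]
                         for i in range(n)))
    return {
        "id": ann_id,
        "image_id": image_id,
        "category_id": category_id,
        "segmentation": [polygon],   # COCO stores a list of polygons
        "bbox": [x_min, y_min, w, h],
        "area": area,
        "iscrowd": 0,
    }

# A 10x10 square footprint: bbox [0, 0, 10, 10], area 100.
ann = polygon_to_coco_annotation([0, 0, 10, 0, 10, 10, 0, 10], image_id=1, ann_id=1)
```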

The structure of the data directory should be as follows:

```
/data
|-- crowdai                      # AICrowd dataset downloaded from website
|   |-- train
|   |   |-- images
|   |   |-- annotation.json
|   |   |-- annotation-small.json
|   |-- val
|       |-- images
|       |-- annotation.json
|       |-- annotation-small.json
|-- inria
    |-- raw
    |   |-- train
    |   |   |-- images
    |   |   |-- gt
    |   |-- test
    |       |-- images
    |-- train                    # generated by tools/inria_to_coco.py
        |-- images
        |-- annotation.json
```
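Before training, it can help to verify that the layout above is in place. A small self-check sketch (the helper is not part of the repo; the path list only mirrors the tree above):

```python
from pathlib import Path

# Expected dataset paths, relative to the data root.
EXPECTED = [
    "crowdai/train/images",
    "crowdai/train/annotation.json",
    "crowdai/val/images",
    "crowdai/val/annotation.json",
    "inria/raw/train/images",
    "inria/raw/train/gt",
    "inria/raw/test/images",
    "inria/train/images",
    "inria/train/annotation.json",
]

def missing_entries(data_root):
    """Return the expected dataset paths that do not exist under data_root."""
    root = Path(data_root)
    return [p for p in EXPECTED if not (root / p).exists()]
```

Calling `missing_entries("/data")` should return an empty list once both datasets are prepared.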

### Training
The models with HRNetV2 as the backbone are initialized with ImageNet-pretrained parameters. You can 
download them from https://github.com/HRNet/HRNet-Image-Classification and put them in 
`./hisup/backbones/hrnet_imagenet`.

Single-GPU training:

```shell
python scripts/train.py --config-file config-files/crowdai-small_hrnet48.yaml
```

Multi-GPU training:

```shell
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 scripts/multi_train.py --config-file config-files/crowdai_hrnet48.yaml
```


### Testing 
After training, a directory defined by `OUTPUT_DIR` in the config file will be created, containing the trained parameters.
For `crowdai-small_hrnet48.yaml`, the output looks like:

```
/outputs/crowdai_hrnet48
|-- config.yml       # saved hyper-parameter settings
|-- log.txt          # saved experimental log
|-- train.log        # saved training log
|-- model_00030.pth  # parameters
|-- last_checkpoint  # path of the parameters file
```

We also provide pretrained models with the HRNetV2-W48 backbone for the AICrowd and Inria datasets. 
You can download the [pretrained models](https://drive.google.com/drive/folders/1IYAuM08Cmqp6OzHKWFv0y-gplNe2E8t2)
and put them in the directories specified by the configuration files.

```shell
python scripts/test.py --config-file config-files/crowdai_hrnet48.yaml
```

### Evaluation
We provide implementations of different metrics for evaluation. 
You can run the following command to evaluate test results in MS-COCO format.
[Predictions](https://drive.google.com/drive/folders/1VgOqnWfCJxic1riOtq7tT96-8w58ss7g) in JSON format for the validation set of the AICrowd dataset are provided.

```shell
python tools/evaluation.py --gt-file [GT_ANNOTATION_FILE] --dt-file [PREDICT_ANNOTATION_FILE] --eval-type boundary_iou
```
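For intuition about the IoU-style metrics involved, here is a from-scratch sketch of plain mask IoU on small binary grids. This is an illustration only, not the COCO API or boundary IoU API implementation (those operate on RLE-encoded masks):

```python
def mask_iou(pred, gt):
    """Intersection-over-union of two binary masks given as lists of 0/1 rows."""
    inter = sum(p & g for pr, gr in zip(pred, gt) for p, g in zip(pr, gr))
    union = sum(p | g for pr, gr in zip(pred, gt) for p, g in zip(pr, gr))
    return inter / union if union else 0.0

# Two overlapping 2x2 building footprints on a 3x3 grid.
pred = [[1, 1, 0],
        [1, 1, 0],
        [0, 0, 0]]
gt   = [[0, 1, 1],
        [0, 1, 1],
        [0, 0, 0]]
# Intersection = 2 pixels, union = 6 pixels -> IoU = 1/3.
```

Boundary IoU applies the same ratio after restricting both masks to a thin band around their contours, which makes it more sensitive to polygon boundary quality.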


## Citation
If you find our work useful in your research, please consider citing:

```
@article{XU2023284,
  author  = {Bowen Xu and Jiakun Xu and Nan Xue and Gui-Song Xia},
  title   = {HiSup: Accurate polygonal mapping of buildings in satellite imagery with hierarchical supervision},
  journal = {ISPRS Journal of Photogrammetry and Remote Sensing},
  volume  = {198},
  pages   = {284-296},
  year    = {2023},
  issn    = {0924-2716},
  doi     = {https://doi.org/10.1016/j.isprsjprs.2023.03.006},
}
```



## Acknowledgement
This repo benefits from [hawp](https://github.com/cherubicXN/hawp), 
[ECA-Net](https://github.com/BangguWu/ECANet),
[HR-Net](https://github.com/HRNet/HRNet-Image-Classification),
[boundary iou api](https://github.com/bowenc0221/boundary-iou-api),
[frame-field](https://github.com/Lydorn/Polygonization-by-Frame-Field-Learning),
[polymapper](https://github.com/lizuoyue/ETH-Thesis),
[polyworld](https://github.com/zorzi-s/PolyWorldPretrainedNetwork). We thank the authors for their great work.