
PDAM: Unsupervised Domain Adaptive Instance Segmentation in Microscopy Images

In this project, we propose a Panoptic Domain Adaptive Mask R-CNN (PDAM) for unsupervised domain adaptive instance segmentation in microscopy images.

This repository implements our two papers:

Unsupervised Instance Segmentation in Microscopy Images via Panoptic Domain Adaptation and Task Re-Weighting, CVPR 2020.

PDAM: A Panoptic-Level Feature Alignment Framework for Unsupervised Domain Adaptive Instance Segmentation in Microscopy Images, IEEE Transactions on Medical Imaging.

Introduction and Installation

Please follow maskrcnn-benchmark to set up the environment. This project uses PyTorch 1.4.0 and CUDA 10.1.
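
After installation, you can verify the environment with a quick check (a minimal sketch, assuming a CUDA build of PyTorch; not part of the repo):

```python
# Quick environment sanity check (illustration only).
import torch

print(torch.__version__)          # expected: 1.4.0
print(torch.version.cuda)         # expected: 10.1
print(torch.cuda.is_available())  # should be True on a working GPU setup
```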

Data

Data Introduction

In this work, we use four datasets:

Histopathology Images: TCGA-Kumar and TNBC. Please download them from this link.

The testing images in the TCGA-Kumar dataset are renamed during inference and evaluation. Please refer to this link for details.

Fluorescence Microscopy Images: BBBC039 Version 1. Download from this link.

Electron Microscopy Images: EPFL, and VNC.

If you use these datasets in your research, please also cite the original papers.

Data Preparation

All data should be placed in ./dataset. For the detailed paths of each dataset, please refer to:

./maskrcnn_benchmark/config/path_catalog.py
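
In maskrcnn-benchmark, datasets are registered in the paths catalog as a name-to-path mapping. The sketch below shows the registration pattern with hypothetical entry names and paths; the actual values live in the file above:

```python
# Sketch of the maskrcnn-benchmark dataset registration pattern.
# Entry names and paths are illustrative, not the repo's actual values.
class DatasetCatalog(object):
    DATA_DIR = "dataset"
    DATASETS = {
        "fluo2tcga_train": {                                 # hypothetical name
            "img_dir": "fluo2tcga/train/images",             # training images
            "ann_file": "fluo2tcga/train/annotations.json",  # MSCOCO-format instances
        },
    }
```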

Here we provide some sample images for the adaptation from BBBC039V1 to TCGA-Kumar (fluo2tcga).

Note that the instance annotations are stored in .json files following the MSCOCO format. If you want to generate the annotations yourself, please follow this repository.
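
For reference, an MSCOCO-style instance annotation file contains three top-level lists. A minimal sketch that inspects one (the file path here is hypothetical):

```python
# Inspect an MSCOCO-format annotation file (path is hypothetical).
import json

with open("dataset/fluo2tcga/train/annotations.json") as f:
    coco = json.load(f)

print(coco.keys())  # dict_keys(['images', 'annotations', 'categories'])
# 'images':      one record per image: id, file_name, height, width
# 'annotations': one record per instance: id, image_id, category_id,
#                segmentation (polygons), bbox ([x, y, w, h]), area, iscrowd
# 'categories':  one record per class: id, name
```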

Model training

First, follow our paper to generate synthesized patches using CycleGAN.

Next, apply the nuclei inpainting mechanism by running python auxiliary_nuclei_inpaint.py. Several demo results are provided in ./nuclei_inpaint.

To train the model on the three UDA settings in our papers, please refer to:

./train_gn_pdam.sh

Model Inference and Evaluation

The code for this part is in ./inference. We take the setting from BBBC039V1 to TCGA-Kumar (fluo2tcga) as an example:

To get the instance segmentation predictions, run python fluo2tcga_infer.py. Remember to manually set the paths of the pre-trained weights, the testing images, and the output folder.
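
The variables below illustrate what needs to be set; the names and paths are hypothetical, so check fluo2tcga_infer.py for the actual ones:

```python
# Hypothetical path variables; the actual names live in fluo2tcga_infer.py.
weight_path = "./weights/fluo2tcga/model_final.pth"  # pre-trained PDAM weights
test_img_dir = "./dataset/fluo2tcga/test/images"     # testing images
output_dir = "./results/fluo2tcga"                   # predictions are written here
```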

To evaluate the segmentation performance under the Aggregated Jaccard Index (AJI), pixel-level F1, and Panoptic Quality (PQ), please run python fluo2tcga_eva.py. The overall results for all the testing images will be saved in a .xls file.
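
As a reference for the metrics, pixel-level F1 compares the binarized foreground of a prediction against the ground truth. A minimal sketch of the idea (illustration only, not the repo's evaluation code):

```python
# Minimal pixel-level F1 between two binary masks (illustration only,
# not the repo's evaluation code).
import numpy as np

def pixel_f1(pred, gt):
    """pred, gt: 2D arrays where nonzero pixels are foreground."""
    pred, gt = pred > 0, gt > 0
    tp = np.logical_and(pred, gt).sum()      # true-positive pixels
    precision = tp / max(pred.sum(), 1)      # guard against empty prediction
    recall = tp / max(gt.sum(), 1)           # guard against empty ground truth
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```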

To visualize the instance-level mask annotations/predictions, please run python color_instance.py.
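
An instance label map can be visualized by assigning each instance ID a random color. A minimal sketch of the idea (not necessarily how color_instance.py does it):

```python
# Color an instance label map: each instance id -> random RGB (illustration).
import numpy as np

def color_instances(label_map, seed=0):
    """label_map: 2D int array, 0 = background, k > 0 = instance k."""
    rng = np.random.RandomState(seed)
    ids = np.unique(label_map)
    ids = ids[ids > 0]                        # skip the background label
    out = np.zeros((*label_map.shape, 3), dtype=np.uint8)
    for i in ids:
        out[label_map == i] = rng.randint(0, 256, size=3)
    return out
```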

Citations (BibTeX)

Please consider citing our papers in your publications if they are helpful to your research:

@inproceedings{liu2020unsupervised,
  title={Unsupervised instance segmentation in microscopy images via panoptic domain adaptation and task re-weighting},
  author={Liu, Dongnan and Zhang, Donghao and Song, Yang and Zhang, Fan and O'Donnell, Lauren and Huang, Heng and Chen, Mei and Cai, Weidong},
  booktitle={Proceedings of the IEEE/CVF conference on computer vision and pattern recognition},
  pages={4243--4252},
  year={2020}
}
@article{liu2020pdam,
  title={PDAM: A Panoptic-level Feature Alignment Framework for Unsupervised Domain Adaptive Instance Segmentation in Microscopy Images},
  author={Liu, Dongnan and Zhang, Donghao and Song, Yang and Zhang, Fan and O'Donnell, Lauren and Huang, Heng and Chen, Mei and Cai, Weidong},
  journal={IEEE Transactions on Medical Imaging},
  year={2020},
  publisher={IEEE}
}

Thanks to the Third-Party Repositories

maskrcnn-benchmark

pytorch-CycleGAN-and-pix2pix

quip_cnn_segmentation

hover_net

Contact

Please contact Dongnan Liu (dongnanliu0201@gmail.com) for any questions.

License

PDAM is released under the MIT license. See LICENSE for additional details.