
DeepLab2: A TensorFlow Library for Deep Labeling

DeepLab2 is a TensorFlow library for deep labeling, aiming to provide a unified and state-of-the-art TensorFlow codebase for dense pixel labeling tasks, including, but not limited to, semantic segmentation, instance segmentation, panoptic segmentation, depth estimation, and even video panoptic segmentation.

Deep labeling refers to solving computer vision problems by assigning a predicted value to each pixel in an image with a deep neural network. As long as the problem of interest can be formulated this way, DeepLab2 should serve the purpose. Additionally, this codebase includes our recent and state-of-the-art research models on deep labeling. We hope you will find it useful for your projects.
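
To make the per-pixel prediction idea concrete, here is a minimal TensorFlow sketch (not DeepLab2 code, and not one of its model architectures): a toy fully convolutional network whose output keeps the input's spatial resolution, so every pixel receives its own class prediction.

    import tensorflow as tf

    num_classes = 19  # e.g., 19 semantic classes, as in Cityscapes

    # Toy fully convolutional network: the output preserves the spatial size,
    # so each pixel gets its own vector of class logits.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, padding='same', activation='relu'),
        tf.keras.layers.Conv2D(num_classes, 1, padding='same'),
    ])

    image = tf.random.uniform([1, 65, 65, 3])   # [batch, height, width, channels]
    logits = model(image)                       # [1, 65, 65, num_classes]
    labels = tf.argmax(logits, axis=-1)         # [1, 65, 65]: one label per pixel

Semantic segmentation assigns a class label per pixel as above; tasks such as instance or panoptic segmentation and depth estimation change what the per-pixel value means, not the overall formulation.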

Change logs

Installation

See Installation.

Dataset preparation

The dataset needs to be converted to the TFRecord format. We provide some examples below.

Some guidance on how to convert your own dataset.
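
As a rough illustration of the conversion pattern (this is a generic sketch, not the repository's conversion scripts; the feature keys below are illustrative and should be replaced by whatever keys your data reader expects), image/label file pairs can be serialized into a TFRecord file like this:

    import tensorflow as tf

    def _bytes_feature(value):
      """Wraps a byte string as a tf.train.Feature."""
      return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

    def convert_to_tfrecord(image_paths, label_paths, output_path):
      """Serializes (image, label) file pairs into one TFRecord file."""
      with tf.io.TFRecordWriter(output_path) as writer:
        for image_path, label_path in zip(image_paths, label_paths):
          # Read the already-encoded PNG/JPEG bytes; no decoding is needed here.
          image_bytes = tf.io.gfile.GFile(image_path, 'rb').read()
          label_bytes = tf.io.gfile.GFile(label_path, 'rb').read()
          example = tf.train.Example(features=tf.train.Features(feature={
              # Hypothetical feature keys, for illustration only.
              'image/encoded': _bytes_feature(image_bytes),
              'image/segmentation/class/encoded': _bytes_feature(label_bytes),
          }))
          writer.write(example.SerializeToString())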

Projects

We list a few projects that use DeepLab2.

Colab Demo

Note that the exported models used in all the demos run in CPU mode.

Running DeepLab2

See Getting Started. In short, to run DeepLab2 on GPUs, use the following command:

python trainer/train.py \
    --config_file=${CONFIG_FILE} \
    --mode={train | eval | train_and_eval | continuous_eval} \
    --model_dir=${BASE_MODEL_DIRECTORY} \
    --num_gpus=${NUM_GPUS}
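
For instance, a hypothetical invocation of the same command might look like the following; the config path, model directory, and GPU count are placeholders, so substitute a config shipped under the configs/ directory of your checkout and your own output path:

python trainer/train.py \
    --config_file=configs/cityscapes/panoptic_deeplab/resnet50_os32.textproto \
    --mode=train_and_eval \
    --model_dir=/tmp/deeplab2_cityscapes_run \
    --num_gpus=4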

Contacts (Maintainers)

Please check the FAQ if you have questions before reporting an issue.

Disclaimer

Citing DeepLab2

If you find DeepLab2 useful for your project, please consider citing DeepLab2 along with the relevant DeepLab series papers.

@article{deeplab2_2021,
  author={Mark Weber and Huiyu Wang and Siyuan Qiao and Jun Xie and Maxwell D. Collins and Yukun Zhu and Liangzhe Yuan and Dahun Kim and Qihang Yu and Daniel Cremers and Laura Leal-Taixe and Alan L. Yuille and Florian Schroff and Hartwig Adam and Liang-Chieh Chen},
  title={{DeepLab2: A TensorFlow Library for Deep Labeling}},
  journal={arXiv preprint arXiv:2106.09748},
  year={2021}
}
