Amodal Instance Segmentation through KINS Dataset

by Lu Qi, Li Jiang, Shu Liu, Xiaoyong Shen, Jiaya Jia.

Update! (16.02.2020)

Introduction

This repository releases the training and test sets of KINS. The annotation format follows the COCO style, and the masks can be decoded with the COCO API.
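Since the annotations follow the COCO style, the sketch below shows one way they might be loaded and decoded with pycocotools. The annotation file name `instances_train.json` is an assumption for illustration; adjust it to the released KINS annotation files.

```python
# Minimal sketch (not part of the official code) for decoding COCO-style
# KINS annotations with pycocotools. The annotation path is hypothetical.
from pycocotools.coco import COCO

coco = COCO("instances_train.json")  # hypothetical path to a KINS annotation file

img_id = coco.getImgIds()[0]              # pick the first image
ann_ids = coco.getAnnIds(imgIds=img_id)   # all annotations for that image
anns = coco.loadAnns(ann_ids)

for ann in anns:
    # annToMask handles both polygon and RLE segmentations and
    # returns a binary (H, W) numpy array.
    binary_mask = coco.annToMask(ann)
    print(ann["category_id"], binary_mask.sum())
```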

The reference code for the method in the CVPR 2019 paper 'Amodal Instance Segmentation through KINS Dataset' has also been released. The codebase is built on pytorch-detectron; please refer to the released code for implementation details. Unfortunately, I was not able to port it to a clean maskrcnn-benchmark version.

The images can be downloaded from the KITTI object detection benchmark: http://www.cvlibs.net/datasets/kitti/eval_object.php?obj_benchmark=2d. Please download the left color images of the object dataset (http://www.cvlibs.net/download.php?file=data_object_image_2.zip).

Citation

If you find our method and dataset useful for your research, please consider citing:

@inproceedings{qi2019amodal,
  title={Amodal Instance Segmentation With KINS Dataset},
  author={Qi, Lu and Jiang, Li and Liu, Shu and Shen, Xiaoyong and Jia, Jiaya},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={3014--3023},
  year={2019}
}

Contact

For questions, please send an email to qqlu1992@gmail.com.