PyTorch implementation for self-supervised co-part segmentation.
Copyright (C) 2019 NVIDIA Corporation. All rights reserved. Licensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).
The code is developed based on PyTorch v0.4 with TensorboardX as the visualization tool. We recommend using virtualenv to run our code:
```
$ virtualenv -p python3 scops_env
$ source scops_env/bin/activate
(scops_env)$ pip install -r requirements.txt
```
To deactivate the virtual environment, run `deactivate`. To activate the environment again, run `source scops_env/bin/activate`.
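The repository ships its own `requirements.txt`, which is the authoritative dependency list; the pins below are only an illustrative sketch of the kind of packages it contains, based on the PyTorch v0.4 and TensorboardX versions noted above:

```
# Illustrative pins only; install from the repo's actual requirements.txt.
torch==0.4.1
torchvision==0.2.1
tensorboardX
numpy
scipy
Pillow
opencv-python
```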
Download the data (saliency maps, labels, pretrained model):
```
$ ./download_CelebA.sh
```
Download the unaligned CelebA images from here.
Then run
```
$ ./evaluate_celebAWild.sh
```
and accept all default options. The results are stored in a single web page at `results_CelebA/SCOPS_K8/ITER_100000/web_html/index.html`.
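One way to open the generated page is from Python (the path is the one produced by the step above):

```python
# Open the evaluation result page in the default browser.
import os
import webbrowser

page = 'results_CelebA/SCOPS_K8/ITER_100000/web_html/index.html'
webbrowser.open('file://' + os.path.abspath(page))
```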
To train the model, run
```
$ CUDA_VISIBLE_DEVICES={GPU} python train.py -f exps/SCOPS_K8_retrain.json
```
where `{GPU}` is the GPU device number.
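For orientation, below is a minimal sketch of how a script like `train.py` can consume the `-f` experiment file; the JSON keys shown (`num_parts`, `num_steps`) are assumptions for illustration, not the repo's actual schema:

```python
# Sketch: parse -f and load the JSON experiment configuration.
import argparse
import json

parser = argparse.ArgumentParser(description='SCOPS training (sketch)')
parser.add_argument('-f', '--exp-file', required=True,
                    help='JSON experiment file, e.g. exps/SCOPS_K8_retrain.json')
args = parser.parse_args()

with open(args.exp_file) as fp:
    cfg = json.load(fp)

# Hypothetical keys: a K8 experiment would presumably set num_parts to 8.
print('parts: {}, iterations: {}'.format(cfg.get('num_parts'), cfg.get('num_steps')))
```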
Note: the model is trained with two main differences from the master branch: 1) it is trained with ground-truth silhouettes rather than saliency maps, and 2) it crops birds with respect to bounding boxes rather than using the original image.
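As a minimal sketch of the bounding-box cropping mentioned in point 2, assuming CUB's `(x, y, width, height)` box format; the function below is illustrative, not the repo's implementation:

```python
# Crop an image to a CUB-style (x, y, w, h) bounding box before training.
from PIL import Image

def crop_to_bbox(img, bbox):
    x, y, w, h = bbox
    return img.crop((int(x), int(y), int(x + w), int(y + h)))

bird = Image.open('path/to/CUB_200_2011/images/some_bird.jpg')
cropped = crop_to_bbox(bird, (60, 27, 325, 304))  # box from bounding_boxes.txt
```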
First, set the image and annotation paths on lines 35 and 37 of `dataset/cub.py`. Then run:
```
sh eval_cub.sh
```
Results as well as visualizations can be found in the `results/cub/ITER_60000/train/` folder.
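For reference, the two lines to edit in `dataset/cub.py` look roughly like the following; the variable names are illustrative, so check the file for the exact ones:

```python
# dataset/cub.py (sketch) -- illustrative variable names
IMG_DIR = '/path/to/CUB_200_2011/images'          # line 35: image root
ANNO_DIR = '/path/to/CUB_200_2011/segmentations'  # line 37: annotation root
```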
Please consider citing our paper if you find this code useful for your research.
```
@inproceedings{hung:CVPR:2019,
  title = {SCOPS: Self-Supervised Co-Part Segmentation},
  author = {Hung, Wei-Chih and Jampani, Varun and Liu, Sifei and Molchanov, Pavlo and Yang, Ming-Hsuan and Kautz, Jan},
  booktitle = {IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)},
  month = jun,
  year = {2019}
}
```