This repository provides the code for the WACV 2020 paper *Synthesizing human-like sketches from natural images using a conditional convolutional decoder*. Find more details here.
The repo runs with the dependencies listed in the included `requirements.txt`.
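For example, the dependencies can be installed with pip (ideally inside a fresh virtual environment):

```bash
pip install -r requirements.txt
```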
Download the pretrained models from here and put them in the folder `pretrained_models`.
After downloading the pretrained models, simply run `python test.py --img_path <path-to-image> --label <label-of-image>`. See `python test.py --help` for the list of parameters.
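A minimal example invocation, where the image path and label are placeholders (use a class label the model was trained on):

```bash
python test.py --img_path data/photo/cat.jpg --label cat
```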
First download the data and make sure you have the pretrained models `resnet_classifier.pt` and `PSim_alexnet.pt` inside `pretrained_models`.
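The `pretrained_models` folder should then contain at least:

```
+-- pretrained_models/
|   +-- resnet_classifier.pt
|   +-- PSim_alexnet.pt
```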
Download the Sketchy database from http://sketchy.eye.gatech.edu/. Get both the Sketches and Photos as well as the Annotation and Info.
Unpack into the following structure:
```
+-- data/
|   +-- photo/
|   +-- sketch/
|   +-- info/
```
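A sketch of the unpacking step, assuming the downloads arrive as zip archives (the actual archive names and internal folder layout on the Sketchy page may differ, so adjust accordingly):

```bash
mkdir -p data
# extract photos, sketches, and annotation/info into the expected folders
unzip photos.zip   -d data/photo
unzip sketches.zip -d data/sketch
unzip info.zip     -d data/info
```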
To train your own network, simply run `python train.py`; it runs with default parameters. See `python train.py --help` for the list of parameters. The training script writes to stdout and generates logfiles as well as TensorBoard output.
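For example, to start a default training run and monitor it with TensorBoard (the log directory is a placeholder; check the script output or `--help` for where logfiles and event files are written):

```bash
python train.py
tensorboard --logdir <log-dir>
```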
If you use anything from this work for your own research, please cite:
```bibtex
@inproceedings{kampelmuehler2020synthesizing,
  title={Synthesizing human-like sketches from natural images using a conditional convolutional decoder},
  author={Kampelm{\"u}hler, Moritz and Pinz, Axel},
  booktitle={IEEE Winter Conference on Applications of Computer Vision (WACV)},
  year={2020}
}
```