This repository contains the code for the CVPR paper ‘Sketch Me That Shoe’, a deep-learning-based implementation of fine-grained sketch-based image retrieval.
For more details, please visit our project page: http://www.eecs.qmul.ac.uk/~qian/Project_cvpr16.html
New: a TensorFlow implementation can be found here: https://github.com/yuchuochuo1023/Deep_SBIR_tf/tree/master.
If you use this code for your research, please cite our paper:
@inproceedings{qian2016,
Author = {Qian Yu and Feng Liu and Yi-Zhe Song and Tao Xiang and Timothy M. Hospedales and Chen Change Loy},
Title = {Sketch Me That Shoe},
Booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
Year = {2016}
}
MIT License
Download the repository
git clone git@github.com:seuliufeng/DeepSBIR.git
Build Caffe and pycaffe
a. Go to folder $SBIR_ROOT/caffe_sbir
b. Modify the paths in Makefile.config. To use this code, you have to compile with the Python layer enabled:
WITH_PYTHON_LAYER := 1
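For reference, the relevant lines of a Caffe Makefile.config typically look like the following. The CUDA and Python paths are placeholders; adjust them to your local installation:

```makefile
# Required by this code: build the Python layer support
WITH_PYTHON_LAYER := 1

# Example paths -- change these to match your machine
CUDA_DIR := /usr/local/cuda
PYTHON_INCLUDE := /usr/include/python2.7 \
        /usr/lib/python2.7/dist-packages/numpy/core/include
```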
c. Compile Caffe:
```shell
make -j32 && make pycaffe
```
Go to folder $SBIR_ROOT, and run
source bashsbir
To run the demo, please first download our database and models. Go to the root folder of this project, and run
chmod +x download_data.sh
./download_data.sh
Note: You can also download them manually from our project page: http://www.eecs.qmul.ac.uk/~qian/Project_cvpr16.html
Run the demo:
python $SBIR_ROOT/tools/sbir_demo.py
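At its core, the demo retrieves photos for a query sketch by comparing deep features in a shared embedding space. Below is a minimal, hedged sketch of that retrieval step; in the real demo the features come from the released model via pycaffe, whereas here `rank_gallery` is a hypothetical helper operating on plain arrays:

```python
import numpy as np

def rank_gallery(sketch_feat, gallery_feats):
    """Return gallery indices ordered from best to worst match.

    sketch_feat:   1-D feature vector of the query sketch
    gallery_feats: 2-D array, one feature row per gallery photo
    """
    # L2-normalise features so Euclidean distance mirrors cosine similarity
    s = sketch_feat / np.linalg.norm(sketch_feat)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    dists = np.linalg.norm(g - s, axis=1)   # distance of each photo to the sketch
    return np.argsort(dists)                # closest photo first
```

For example, with a gallery of three photo features and a sketch feature close to the first one, the first photo is ranked on top.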
Go to the root folder of this project
cd $SBIR_ROOT
Run the command
./experiments/train_sbir.sh
Note: Please make sure the initial model ‘init/sketchnet_init.caffemodel’ is under the folder experiments/. This initial model can be downloaded from our project page.
All provided models and code are the optimised versions. Our latest results are shown below:
| Dataset | acc.@1 | acc.@10 | %corr. |
|---|---|---|---|
| Shoes | 52.17% | 92.17% | 72.29% |
| Chairs | 72.16% | 98.96% | 74.36% |
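The acc.@K numbers above follow the standard retrieval convention: the fraction of query sketches whose true matching photo appears among the K nearest gallery photos. A minimal sketch of that computation, assuming a precomputed sketch-to-photo distance matrix where photo i is the ground-truth match for sketch i:

```python
import numpy as np

def acc_at_k(dist, k):
    """Fraction of sketches whose true photo is ranked in the top k.

    dist[i, j] is the distance between sketch i and photo j; the
    matching photo for sketch i is assumed to be photo i.
    """
    order = np.argsort(dist, axis=1)    # photos ranked per sketch, nearest first
    topk = order[:, :k]                 # best k candidates for each sketch
    hits = (topk == np.arange(len(dist))[:, None]).any(axis=1)
    return hits.mean()
```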
Further explanation: The model reported in our paper was trained on our originally collected sketches, which contained considerable noise. To improve usability, we cleaned the sketch images (removed some noise) after the CVPR 2016 deadline. You can compare images 'test_shoes_370.png' and '370.jpg' (or 'test_chairs_230.png'/'230.jpg') to see the difference. We re-trained our model on the cleaned sketch images, and the new results are listed above. Both the model and the dataset we have released are the latest versions. Sorry for any confusion this may cause. If you have further questions, please email q.yu@qmul.ac.uk.
This project uses code from the following project: