Pedro D. Marrero Fernandez1, Fidel A. Guerrero-Peña1, Tsang Ing Ren1, Jorge J. G. Leandro2
1Universidade Federal de Pernambuco, 2Motorola Mobility LLC, a Lenovo Company
In Image and Vision Computing 2018
This code implements a multiple ColorChecker detection method, as described in the paper Fast and Robust Multiple ColorChecker Detection. The process is divided into two steps: (1) ColorChecker localization and (2) ColorChecker patch recognition.
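The two-step pipeline can be outlined as below. This is only an illustrative sketch: the function names and return types are hypothetical, not the repository's API, and in the actual method step 1 is performed by a deep convolutional network.

```python
def localize_colorcheckers(image):
    """Step 1 (stubbed): find candidate ColorChecker regions.

    The real system uses a CNN detector; here we only sketch the
    interface, returning a list of bounding boxes (x, y, w, h).
    """
    return [(10, 10, 120, 80)]  # placeholder candidate region

def recognize_patches(image, box):
    """Step 2 (stubbed): within one candidate region, recover the
    patch grid and read out one color per patch."""
    rows, cols = 4, 6  # classic 24-patch ColorChecker layout
    # Placeholder: map each (row, col) patch position to a color.
    return {(r, c): (0, 0, 0) for r in range(rows) for c in range(cols)}

def detect(image):
    """Full pipeline: localization followed by patch recognition."""
    return [recognize_patches(image, box)
            for box in localize_colorcheckers(image)]
```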
You need OpenCV v3.1.0 or later and NVIDIA Caffe. This installation package includes support for compiling OpenCV on Windows with Visual Studio (vs.12 and vs.14) and MinGW.
Build the project with CMake from the command line:
export OpenCV_DIR="./extern/opencv"
mkdir build
cd build
cmake -D OpenCV_DIR=$OpenCV_DIR ..
make
First, take a photo or record a video of the ColorChecker Passport. Then run one of the following:
./build/src/mcc ../db/img-colorchecker.jpg -o=../out -t=1 -sh -gt -nc=0
./build/src/mcc ../db/vdo-colorchecker.mp4 -o=../out -t=2 -sh -gt -nc=2
./build/src/mcc ../db/sec-colchecker-0.jpg -o=../out -t=3 -sh -gt -nc=2 -me=10.0
options:
-t   # application type: 1 = single image, 2 = video, 3 = image sequence
-o   # output directory (default: current directory)
-me  # minimum error
-nc  # maximum number of ColorCheckers in the image
-sh  # show results
-gt  # generate a table in .csv format
[]   # input path
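To process a folder of images, the per-image invocation above can be scripted. The following is a minimal sketch, not part of the repository: the binary path and options mirror the examples above, but the wrapper itself is hypothetical.

```python
import subprocess
from pathlib import Path

MCC_BIN = "./build/src/mcc"  # path produced by the build steps above

def build_command(image_path, out_dir="../out", app_type=1, max_checkers=1):
    """Assemble the mcc command line for one image (flags per the option list above)."""
    return [
        MCC_BIN,
        str(image_path),
        f"-o={out_dir}",
        f"-t={app_type}",
        "-sh",   # show results
        "-gt",   # generate .csv table
        f"-nc={max_checkers}",
    ]

def run_on_folder(folder):
    """Run the detector on every .jpg in a folder, one image at a time."""
    for img in sorted(Path(folder).glob("*.jpg")):
        subprocess.run(build_command(img), check=True)
```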
usage: mccfindnet.py [-h] --configurate C [--no-gpu] [--json] [--no-show]
[--draw-cam] [--camdevice N] [--border N]
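The usage string above suggests argument parsing along these lines. This is a sketch reconstructed from the usage line only, not the script's actual source, and the help texts are guesses from the option names.

```python
import argparse

def build_parser():
    """Mirror the mccfindnet.py usage string shown above."""
    p = argparse.ArgumentParser(prog="mccfindnet.py")
    p.add_argument("--configurate", metavar="C", required=True,
                   help="path to the configuration file")
    p.add_argument("--no-gpu", action="store_true", help="run on CPU only")
    p.add_argument("--json", action="store_true", help="emit results as JSON")
    p.add_argument("--no-show", action="store_true",
                   help="suppress display windows")
    p.add_argument("--draw-cam", action="store_true",
                   help="draw the camera view")
    p.add_argument("--camdevice", metavar="N", type=int, default=0,
                   help="camera device index")
    p.add_argument("--border", metavar="N", type=int, default=0,
                   help="border size in pixels")
    return p
```

For example, `build_parser().parse_args(["--configurate", "conf.json", "--no-gpu"])` yields a namespace with `configurate="conf.json"` and `no_gpu=True`.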
If you find this work useful for your research, please cite the following paper:
@article{MARREROFERNANDEZ2018,
title = "Fast and Robust Multiple ColorChecker Detection using Deep Convolutional Neural Networks",
journal = "Image and Vision Computing",
year = "2018",
issn = "0262-8856",
doi = "10.1016/j.imavis.2018.11.001",
url = "http://www.sciencedirect.com/science/article/pii/S0262885618301793",
author = "Pedro D. Marrero Fernández and Fidel A. Guerrero Peña
and Tsang Ing Ren and Jorge J.G. Leandro",
}
This work was supported by the research cooperation project between Motorola Mobility (a Lenovo Company) and CIn-UFPE. Tsang Ing Ren, Pedro D. Marrero Fernandez and Fidel A. Guerrero-Peña gratefully acknowledge financial support from the Brazilian government agency FACEPE. The authors would also like to thank Leonardo Coutinho de Mendonça, Alexandre Cabral Mota, Rudi Minghim and Gabriel Humpire for valuable discussions.