I just started learning Caffe. I am planning to work on face detection for my undergrad project. I was amazed by your paper and would like to get started with your method.
Since I will be using my own dataset, I believe I have to create my own function to map my images to their annotations (I am planning to write a function that produces a .xml file in the format shown below, since my labels are in .txt format). However, I might not have a class label, because in my case there will only be face or non-face (background).
I would like your advice on how I should convert my dataset to LMDB format so that I can start training with your model, because if I'm not mistaken your input blob imports its data from LMDB.
You should check how I create the databases for VOC0712/COCO/ILSVRC2016. The code supports three annotation formats: xml/json/txt. Please refer to here for the txt format.
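Since you already plan to generate VOC-style .xml files from your .txt labels, a small conversion script may be all you need before running the existing database-creation scripts. Below is a minimal sketch of that conversion, assuming each of your .txt lines holds one bounding box as `xmin ymin xmax ymax` and that every box is a "face" (the function name, the txt layout, and the fixed class name are my assumptions, not part of the repo):

```python
import xml.etree.ElementTree as ET

def txt_to_voc_xml(image_name, width, height, boxes):
    """Build a minimal Pascal VOC-style annotation tree.

    boxes: list of (xmin, ymin, xmax, ymax) tuples, e.g. parsed from
    one .txt label file. Every box is labeled "face" since this is a
    two-class (face vs. background) task; background needs no entry.
    """
    ann = ET.Element("annotation")
    ET.SubElement(ann, "filename").text = image_name
    size = ET.SubElement(ann, "size")
    ET.SubElement(size, "width").text = str(width)
    ET.SubElement(size, "height").text = str(height)
    ET.SubElement(size, "depth").text = "3"
    for xmin, ymin, xmax, ymax in boxes:
        obj = ET.SubElement(ann, "object")
        ET.SubElement(obj, "name").text = "face"     # single foreground class
        ET.SubElement(obj, "difficult").text = "0"
        bb = ET.SubElement(obj, "bndbox")
        for tag, val in zip(("xmin", "ymin", "xmax", "ymax"),
                            (xmin, ymin, xmax, ymax)):
            ET.SubElement(bb, tag).text = str(val)
    return ann

# Example: one image with one face box, printed as an XML string.
tree = txt_to_voc_xml("img001.jpg", 640, 480, [(10, 20, 110, 140)])
print(ET.tostring(tree, encoding="unicode"))
```

With .xml annotations in this shape, you can adapt the VOC0712 database-creation scripts to build the LMDB; check the repo's scripts for the exact label-map and list-file conventions they expect.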