This code is an implementation of the face recognition algorithm introduced in the paper Deep Learning Face Representation from Predicting 10,000 Classes.
train_model: training models and solvers for the DeepID feature extractor and the face recognizer
deploy_model: deploy models for the DeepID feature extractor and the face recognizer
model_values: trained model files
src: source code
make -j9
You can skip training if you just want to do face recognition with this project, because all pretrained DeepID caffemodel files are provided in the model_values directory.
request the CASIA-WebFace dataset.
crop patches from the dataset as described in the paper
in the root directory
./generate_training_sample -i <path/to/webface> -o <path/to/training patches for 60 models>
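A concrete invocation might look like this (both paths are hypothetical; -i points at the root of your CASIA-WebFace copy, -o at an empty output directory):
./generate_training_sample -i ~/datasets/CASIA-WebFace -o ~/datasets/deepid_patches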
in the root directory
./convert -i <path/to/training patches for 60 models> -o <path/to/lmdbs>
enter each of the generated lmdb directories (one per local facial area, 60 in total) and train with caffe.
caffe train -solver deepid_solver.prototxt -gpu all
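If you do not want to start each of the 60 trainings by hand, a simple shell loop works too. This is only a sketch, assuming every generated lmdb directory contains its own deepid_solver.prototxt as described above:
for d in <path/to/lmdbs>/*/; do
    (cd "$d" && caffe train -solver deepid_solver.prototxt -gpu all)
done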
in the root directory
./move_training_results -i <path/to/lmdbs> -o model_values
in the root directory
./generate_training_samples -i <path/to/webface> -o <path/to/lmdb of deepid feature>
train with caffe on the generated lmdb of DeepID features. My DeepID code only achieves about 70% accuracy when trained and tested on WebFace.
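For example (a sketch only: substitute the actual name of the face recognizer solver provided in train_model, and you will likely need to edit the data layer in the corresponding train prototxt to point at your generated lmdb first):
caffe train -solver train_model/<face recognizer solver>.prototxt -gpu all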
./transform -i <path/to/lmdb of deepid feature> -o <path/to/lmdb of dimension reduced deepid features>
This operation took around three weeks to finish on my workstation.
train with caffe on the generated lmdb of dimension-reduced DeepID features. The accuracy is even lower: only 65%.
put at least one picture of each person into a directory of their own and put all of those directories into one parent directory (see the example layout below).
in the root directory
extract DeepID features from the faces
./main -m train -i <path/to/the parent directory>
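For example, with a hypothetical gallery laid out like this:
faces/
    Alice/
        alice_01.jpg
        alice_02.jpg
    Bob/
        bob_01.jpg
the training call would be:
./main -m train -i faces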
recognize with kNN from pictures captured from the webcam
./main -m test -p 训练参数.dat
(the file name 训练参数.dat means "training parameters")