yxchng opened 6 years ago
We obtained 99.3% accuracy for the pretrained model in our experiment. Our reported result is not a mistake.
Why you only get 99.27% may vary; usually it comes from a mismatch in the preprocessing steps (detection and alignment), not in the recognition part.
@wy1iu I did not do any preprocessing of my own; I only ran your preprocessing code (face_detect_demo.m and face_align_demo.m). How can the result be different when I ran your code?
I am not quite sure what causes the 0.03% accuracy drop in your case; the reasons can vary. But you can retrain the model from scratch using our code. It is not difficult to reach 99.3%.
@yxchng Bingo!
fold ACC
----------------
1 99.33%
2 99.33%
3 99.00%
4 99.50%
5 98.67%
6 99.33%
7 99.17%
8 99.00%
9 100.00%
10 99.33%
----------------
AVE 99.27%
I reproduced the 99.27% with the released sphereface_model.caffemodel.
Landmarks and cropped images are obtained with the author provided code.
What a joke.
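For what it's worth, the per-fold numbers in the table above do average to 99.27%, and the fold-to-fold spread is much larger than the disputed 0.03% gap. A minimal Python sanity check (fold values copied from the table):

```python
# Per-fold LFW accuracies from the table above (folds 1-10).
folds = [99.33, 99.33, 99.00, 99.50, 98.67, 99.33, 99.17, 99.00, 100.00, 99.33]

# 10-fold mean accuracy, as reported on the AVE row.
mean = sum(folds) / len(folds)

# Sample standard deviation across folds.
std = (sum((x - mean) ** 2 for x in folds) / (len(folds) - 1)) ** 0.5

print(f"mean = {mean:.2f}%")  # 99.27%
print(f"std  = {std:.2f}%")   # 0.35%
```

With a standard deviation of roughly 0.35% across folds, a 0.03% difference in the 10-fold mean is well within run-to-run noise, which is consistent with the author's point below that the two numbers are not significantly different.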
@zeakey @yxchng First of all, I know many people have reproduced the 99.3% accuracy using the pretrained model. You may have done something wrong in the pipeline; it could be the version of cuDNN or Caffe, or a dataset issue. You will need to figure that out yourselves. After all, 0.03% is not really a huge gap; it seems to come down to some mismatched detail. To be honest, I would say 99.27% and 99.3% are not significantly different. Retraining SphereFace-20 from scratch may even give you accuracy higher than 99.3%.
However, to make sure that we released the correct model (and to be scientifically rigorous), I still spent some time cloning exactly the same repo and redoing the whole pipeline, and I successfully obtained 99.3% with our released SphereFace-20 model. The following is a screenshot of the testing results (with a timestamp).
Since all the computations are performed on the GPU, the system details that may affect the results are:
- CUDA ver. 9.1
- cuDNN ver. 7.0.5
The images are read by MATLAB; my MATLAB version is R2015b.
The preprocessing of LFW may also affect the evaluation, so is it possible for you to release the cropped LFW, @wy1iu?
@zeakey For evaluating our SphereFace-20, we used CUDA 8 without cuDNN. The version may or may not be the reason; to be honest, I am not sure. I think the problem lies in the preprocessing part, since you used the pretrained model and still could not get 99.3%. You should pay attention to the preprocessing steps; I suspect the software versions may somehow affect the preprocessing results.
This may sound like a silly request, but would you upload the preprocessed dataset you used for training? Thanks.
This is the result I get by running your code and model. In the repo you list the released model as model 3, which you say reaches 99.3%. Is that a mistake?
Also, what do you mean by "going through the pipeline 5 times"? Did you train 5 times and obtain 5 models?