vedrusss closed this issue 6 years ago
Faces from LFW are easy to detect. You should get very high accuracy if you align them by their five landmarks.
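For reference, "align by five landmarks" usually means estimating a similarity transform that maps the detected landmarks onto a fixed template and then warping the image with it. Below is a minimal NumPy sketch of the transform estimation only; the template coordinates are the commonly cited ArcFace 112x112 reference points (an assumption, not necessarily this repo's exact values), and the actual warp (e.g. `cv2.warpAffine`) is omitted:

```python
import numpy as np

# Canonical 5-point template for a 112x112 crop. These are the reference
# coordinates commonly used in ArcFace-style preprocessing; treat them as
# an assumption rather than this repository's exact values.
REFERENCE = np.array([
    [38.2946, 51.6963],   # left eye
    [73.5318, 51.5014],   # right eye
    [56.0252, 71.7366],   # nose tip
    [41.5493, 92.3655],   # left mouth corner
    [70.7299, 92.2041],   # right mouth corner
], dtype=np.float64)

def similarity_transform(src, dst):
    """Least-squares similarity transform (scale + rotation + translation)
    mapping src (5x2) landmarks onto dst (5x2). Returns a 2x3 matrix
    suitable for an affine warp."""
    n = src.shape[0]
    A = np.zeros((2 * n, 4))
    b = np.zeros(2 * n)
    for i, (x, y) in enumerate(src):
        A[2 * i]     = [x, -y, 1, 0]   # x' = a*x - b*y + tx
        A[2 * i + 1] = [y,  x, 0, 1]   # y' = b*x + a*y + ty
        b[2 * i], b[2 * i + 1] = dst[i]
    a, bb, tx, ty = np.linalg.lstsq(A, b, rcond=None)[0]
    return np.array([[a, -bb, tx], [bb, a, ty]])

# Landmarks already at the reference positions yield (near-)identity.
M = similarity_transform(REFERENCE, REFERENCE)
```

The resulting matrix would then be passed to the warp function of your image library of choice.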
What do you mean by 'their five landmarks'? I reviewed the official LFW web page and didn't find any landmarks provided. I understand that I can get very high accuracy using the landmarks, but the question is how to obtain them. Which face detector, with which parameters, should I use to obtain exactly the same landmarks (the ones that yield high recognition accuracy)?
Because, as I wrote, I obtained 2% lower accuracy when I aligned LFW myself using the MTCNN face detector with default settings.
MTCNN can provide you with the five landmarks. How did you align the face images? You can post your code here.
Yes, I use the five landmarks provided by the MTCNN detector, as follows:

1) I create the detector using default parameters:

```python
detector = MtcnnDetector(model_folder=mtcnn_path, ctx=mx.gpu(0), num_worker=1,
                         minsize=20, factor=0.709, accurate_landmark=True,
                         threshold=[0.6, 0.7, 0.8])
```

2) Next I use it within a function to align images:

```python
def align(img):
    global detector
    ret = detector.detect_face(img)
    if ret is None:
        return None
    bbox, points = ret
    if bbox.shape[0] == 0:
        return None
    bbox = bbox[0, 0:4]
    points = points[0, :].reshape((2, 5)).T
    nimg = face_preprocess.preprocess(img, bbox, points, image_size=(112, 112))
    nimg = cv2.cvtColor(nimg, cv2.COLOR_BGR2RGB)
    aligned = np.transpose(nimg, (2, 0, 1))
    return aligned
```
But this way, as I wrote, I obtain an LFW accuracy of 0.97 instead of 0.99. I guess the detector parameters are not the same as the ones you used. Could you share which detector parameters you used?
See src/align/align_lfw.py
Ah, I've got it! I should have reviewed that file, given its name. Finally, I borrowed the face detector settings (thresholds and the factor) from that file and used them with the Python MTCNN implementation (instead of the TensorFlow one). The estimates I reached on LFW are: Accuracy: 0.998+-0.000, Precision: 0.999+-0.000, Recall: 0.997+-0.001. Thank you very much for your help!
May I ask one more question, about splitting LFW? I guess the models r34-amf and r50-am-lfw were also trained using faces from LFW. Am I right? If so, then only the part of LFW not used for training should be used for evaluation. Do you have such splits (pairsLFW_Test.txt and pairsLFW_Train files)?
Thanks
The training set is MS1M. LFW is only used for validation.
@nttstar, what is the difference between the data-alignment code for the different datasets?
@vedrusss I want to run the same LFW test with InsightFace. Would you mind sharing your steps and code samples for this test?
A local manager of one of the commercial (n e . c) face recognition companies argued that there is no open-source solution reaching 99.30%+ accuracy.
It's a matter of honour now :)
If you have simple, Pythonic code, that would be wonderful.
Best
Hi @vedrusss,
Would you mind sharing your LFW accuracy test code?
Best
Sorry, @MyraBaba, a year has passed since then. It would be very hard to replicate what I did.
Thanks anyway, @vedrusss.
I'm trying to figure it out now... almost there.
Have you found accuracy better than 99.98%?
Hi @nttstar ,
I'm trying to test LFW accuracy as best I can.
I used src/align/align_lfw.py to align the faces into a new folder.
Then I checked the positive and negative pairs.
Out of the 6000 pairs:
it correctly said 'same person' for 2823 pairs, but said 'not the same person' for 177 pairs that actually are the same person.
All 3000 negative pairs were correctly identified as not the same person.
So the accuracy looks very low.
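For what it's worth, the counts above can be turned into an overall accuracy directly. They work out to about 97%, which is below the reported 99.8% but close to the unaligned-MTCNN result discussed earlier in this thread:

```python
# Sanity-check the numbers above: 2823 true positives, 177 false
# negatives, and all 3000 negative pairs correct.
tp, fn = 2823, 177
tn, fp = 3000, 0
accuracy = (tp + tn) / (tp + fn + tn + fp)
print(f"{accuracy:.4f}")  # 0.9705
```

So the gap to 99.8% is about 2.5 points, which matches the kind of difference alignment alone caused earlier in the thread.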
What steps could I be doing wrong?
How can I replicate the 0.998 accuracy?
EDIT: the distance threshold is 1.0999, with dist = np.sum(np.square(src_desc - compare_desc))
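A fixed threshold such as 1.0999 may simply not be optimal for a given embedding and distance definition. Here is a simplified sketch of picking the best threshold by sweeping over the observed squared-L2 distances; note that the official LFW protocol instead selects the threshold on training folds in a 10-fold scheme, so this is only an illustration:

```python
import numpy as np

def best_threshold(pos_dists, neg_dists):
    """Sweep candidate thresholds over squared-L2 distances and return
    (threshold, accuracy); pairs with distance below the threshold
    count as 'same person'."""
    dists = np.concatenate([pos_dists, neg_dists])
    labels = np.concatenate([np.ones(len(pos_dists), bool),
                             np.zeros(len(neg_dists), bool)])
    best_t, best_acc = 0.0, 0.0
    for t in np.sort(dists):
        acc = np.mean((dists < t) == labels)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

# Toy example with well-separated positive and negative distances.
pos = np.array([0.4, 0.5, 0.6])
neg = np.array([1.2, 1.3, 1.4])
t, acc = best_threshold(pos, neg)
print(acc)  # 1.0
```

On real data, plotting the two distance distributions first is a quick way to see whether any single threshold can separate them well.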
PS: Can RetinaFace provide better accuracy? I've attached a few images that were reported as not the same person:
Hi, I'm trying to replicate the 99.7% accuracy of the r50 model against the original LFW images (not using the pre-processed aligned face chips encoded into a binary file).
What I found is that the resulting accuracy depends heavily on alignment. I used the default MTCNN face and facial-landmark detector (MtcnnDetector.detect_face()) with default parameters (minsize=20, threshold=[0.6,0.7,0.8], factor=0.709) instead of the one mentioned in the demo scripts (MtcnnDetector.detect_face_limited()). This way I reached 97.5% accuracy. Next I used the pre-detected lfw_landmarks from the SphereFace project (https://github.com/clcarwin/sphereface_pytorch/tree/master/data) without running the MTCNN detector at all, and this allowed me to reach 99.7% accuracy.
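If anyone wants to try the pre-detected landmarks route, a small parser sketch is below. The assumed file format (one image path followed by ten coordinates per line, interleaved x,y) should be verified against the actual SphereFace landmark file before use:

```python
import numpy as np

def load_landmarks(text):
    """Parse a pre-detected landmark file. Assumed format (verify against
    the actual SphereFace file): one line per image,
    '<relative_path> x1 y1 x2 y2 x3 y3 x4 y4 x5 y5',
    whitespace-separated, with coordinates interleaved as x,y pairs."""
    landmarks = {}
    for line in text.strip().splitlines():
        parts = line.split()
        path = parts[0]
        coords = np.array(parts[1:11], dtype=np.float64)
        landmarks[path] = coords.reshape(5, 2)  # one (x, y) row per point
    return landmarks

# Hypothetical example line, not taken from the real file.
sample = "Aaron_Eckhart/Aaron_Eckhart_0001.jpg 107 112 148 111 127 135 111 159 145 159"
lms = load_landmarks(sample)
print(lms["Aaron_Eckhart/Aaron_Eckhart_0001.jpg"].shape)  # (5, 2)
```

If the real file stores all five x values followed by all five y values instead, the `reshape(5, 2)` would need to become `reshape(2, 5).T`, as in the MTCNN code earlier in the thread.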
So I guess you also used pre-detected landmarks.
The question is: how did you obtain those landmarks? Which MTCNN implementation did you use (there are many), and with which detector parameters?
I would really like to replicate the reported accuracy on LFW by running the whole pipeline, without using any pre-detections.
Thanks
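For anyone wanting to run the whole pipeline end to end, here is a minimal verification skeleton. `embed()` is a placeholder standing in for the real aligned-crop-to-embedding model call (it is not the InsightFace API), and 1.0999 is just the threshold mentioned earlier in the thread, not a tuned value:

```python
import numpy as np

def embed(aligned_face):
    """Placeholder: replace with the actual model forward pass returning
    an L2-normalized embedding. Here we derive a deterministic pseudo-
    embedding from the pixel bytes so identical inputs match exactly."""
    seed = abs(hash(aligned_face.tobytes())) % (2 ** 32)
    rng = np.random.default_rng(seed)
    v = rng.normal(size=512)
    return v / np.linalg.norm(v)

def verify(img_a, img_b, threshold=1.0999):
    """Pipeline sketch: align (omitted; see the MTCNN code earlier in the
    thread), embed, then compare squared-L2 distance to a threshold."""
    ea, eb = embed(img_a), embed(img_b)
    dist = np.sum(np.square(ea - eb))
    return bool(dist < threshold), float(dist)

# Identical inputs produce identical embeddings, hence distance 0.
img = np.zeros((112, 112, 3), dtype=np.uint8)
same, dist = verify(img, img)
print(same, dist)  # True 0.0
```

To reproduce a full LFW score, `verify` would be run over all 6000 protocol pairs after alignment, with the threshold chosen per the 10-fold protocol rather than fixed.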