Hi - I set up your pipeline: MTCNN for face detection followed by FaceNet for face recognition (I tested both the 20170511-185253.pb and 20170512-110547.pb models). But I don't do any classification at the end; I only compare embeddings using cosine and Euclidean distances.
I run this in a while loop for 2 minutes, capturing frames with OpenCV's VideoCapture. I then compare my own face across the frames captured during this period using the cosine and Euclidean distance functions. But the results are not promising at all: the error between frames can sometimes reach 50%. Any idea why that is, and how I can improve it?
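For reference, my comparison step is essentially the following (a minimal sketch; `emb_ref` and `emb_frame` stand for the two FaceNet embedding vectors, which are placeholder names here):

```python
import numpy as np

def cosine_distance(a, b):
    # 1 - cosine similarity between two embedding vectors
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def euclidean_distance(a, b):
    # L2 distance between two embedding vectors
    return float(np.linalg.norm(np.asarray(a, dtype=np.float64)
                                - np.asarray(b, dtype=np.float64)))

# Usage: compare a reference embedding against a per-frame embedding
# emb_ref, emb_frame = get_embedding(ref_image), get_embedding(frame)  # hypothetical helper
# d_cos = cosine_distance(emb_ref, emb_frame)
# d_euc = euclidean_distance(emb_ref, emb_frame)
```

I then threshold these distances to decide whether the two faces match.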
Do you have any accuracy report for your pipeline? What accuracy do you get (for a single person, but across different frames)?
Do I need to apply normalization or RGB-to-grayscale conversion before feeding my frames to MTCNN?
Does MTCNN already perform alignment (similar to align_dlib.py in OpenFace, based on http://blog.dlib.net/2014/08/real-time-face-pose-estimation.html)?
What about 3D alignment, as in Facebook's DeepFace?
Thanks