zhxj9823 closed this issue 5 years ago.
Maybe v3 is just experimental.
But v2 gives a similar result.
@zhxj9823 You can find the method for calculating feature similarity in ConfusionMatrix_similarity_visualization.py#L55.
BTW, I'm tied up at the moment; I will update this repo when I am free.
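For reference, one common way to compare such features is cosine similarity on L2-normalized vectors. A minimal sketch, assuming f1 and f2 are 1-D numpy feature vectors (the function name and details here are mine, not necessarily the exact code at that line):

import numpy as np

def cosine_similarity(f1, f2):
    # L2-normalize both feature vectors, then take their dot product.
    f1 = f1 / np.linalg.norm(f1)
    f2 = f2 / np.linalg.norm(f2)
    return float(np.dot(f1, f2))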
I have tried the same method, but the accuracy is still low. I wonder if there are any requirements for the images. For example, do the images need to be grayscale or RGB? Do they need to contain only frontal faces? On the LFW dataset it performs well, but on my own dataset, which contains many profile faces, the distances between images of the same person are too large to find a proper threshold.
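For context, this is roughly how I search for a threshold over pair distances (a sketch with my own variable names; dists is an array of pairwise distances and labels marks same-person pairs with 1, different-person pairs with 0):

import numpy as np

def best_threshold(dists, labels):
    # Try each observed distance as a candidate threshold and keep the one
    # with the highest verification accuracy (same-person pairs should fall
    # below the threshold).
    best_acc, best_thr = 0.0, 0.0
    for thr in np.sort(dists):
        pred = (dists < thr).astype(int)
        acc = float(np.mean(pred == labels))
        if acc > best_acc:
            best_acc, best_thr = acc, thr
    return best_thr, best_acc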
@zhxj9823 All of these factors affect the recognition results: the training dataset, face pose and quality, input size and color, and keypoint alignment. For more detail you can refer to insightface.
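For example, here is a sketch of the kind of preprocessing that has to match the training setup (the 112x112 RGB assumption is only illustrative; check the settings of the model you actually use):

import cv2
import numpy as np

def preprocess(aligned_bgr, size=112):
    # Resize an aligned face crop to the expected input size, convert OpenCV's
    # BGR order to RGB, and move to CHW layout for MXNet. A mismatch in any of
    # these steps versus training will degrade the features.
    img = cv2.resize(aligned_bgr, (size, size))
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    img = np.transpose(img, (2, 0, 1))
    return img[np.newaxis].astype(np.float32)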
@becauseofAI Actually, I used insightface first. Its accuracy is quite high on my own test dataset, but its inference time is relatively long, so I turned to your model; however, the accuracy on the same dataset is too low for me.
So it's a trade-off between speed and accuracy. In fact, v1 is suited to the certificate-photo scenario, v2 achieves reasonable accuracy on the LFW, AgeDB-30, and CFP-FP datasets, and v3 was an extreme experiment. When I am free, I will train and test the detection, keypoint, and recognition models on the same cross-scenario data, and then update the whole pipeline in the code. I will not reply before that.
@becauseofAI Thanks for the clarification. The accuracy of MobileFace on my dataset is below 60%, while it reaches 99% when I use insightface. That gap seems too large to be reasonable.
Thanks for your great work! I want to use it to build a face recognition project, so I combined the code of get_face_align.py and get_face_feature_v3_mxnet.py into one workflow. I can extract face features, and then I try to use
dist = np.sum(np.square(f1 - f2))  # squared Euclidean (L2) distance
cos = cosine(f1, f2)               # cosine distance
to compare the similarity, but the accuracy is pretty low. Could you give me an example of how to use them?
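Concretely, the decision step I am using looks like this (a sketch; f1 and f2 are the feature vectors returned by get_face_feature_v3_mxnet.py for two aligned faces, and the threshold is only a placeholder I tuned by hand):

from scipy.spatial.distance import cosine

def is_same_person(f1, f2, threshold=0.5):
    # Cosine distance (1 - cosine similarity); smaller means more similar.
    # The 0.5 threshold is a placeholder, not a tuned value.
    return cosine(f1, f2) < threshold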