mohamedamara7 opened 1 year ago
For reading images and comparing them, you can refer to the Colab notebook efficientnetV2_basic_test.ipynb, "Keras insightface test" section. Technically, just following these steps is enough:
face detection in image
--> face alignment
--> normalize to [-1, 1]
--> feed to the model and get the embedding output
--> L2 normalize the embedding
--> calculate cosine distance.
The process in video_test.py#L90 follows this.
import numpy as np
from tensorflow import keras
from sklearn.preprocessing import normalize

iaa = np.zeros([1, 112, 112, 3])  # dummy face image 1, shape (1, 112, 112, 3), values in [-1, 1]
ibb = np.ones([1, 112, 112, 3])   # dummy face image 2
mm = keras.models.load_model('model_path.h5')  # load pretrained basic_model
eaa = mm(iaa)  # face 1 embedding
ebb = mm(ibb)  # face 2 embedding
print(eaa.shape, ebb.shape)
# (1, 256) (1, 256)
eea = normalize(eaa)  # L2 normalize
eeb = normalize(ebb)
print((eea ** 2).sum(), (eeb ** 2).sum())
# 1.0 1.0
print(np.dot(eea, eeb.T))  # cosine similarity
# [[0.38431713]]
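To run the same comparison on real images instead of dummy arrays, the missing piece is reading the files and scaling the face crops to [-1, 1]. Below is a minimal sketch, assuming the two inputs ('face_1.jpg' / 'face_2.jpg' are hypothetical paths) are already detected and aligned 112x112 face crops, and assuming the model expects RGB images scaled with (img - 127.5) / 127.5; if your crops are not aligned yet, run a face detector/aligner first as in the steps above.

import cv2
import numpy as np
from sklearn.preprocessing import normalize
from tensorflow import keras

mm = keras.models.load_model('model_path.h5')  # pretrained basic_model

def embed(image_path, model):
    """Read an image with cv2, scale to [-1, 1], and return its L2-normalized embedding."""
    img = cv2.imread(image_path)                   # BGR, uint8
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)     # assumed: model trained on RGB
    img = cv2.resize(img, (112, 112))              # assumed: face already cropped/aligned
    img = (img.astype('float32') - 127.5) / 127.5  # assumed scaling to [-1, 1]
    emb = model(np.expand_dims(img, 0)).numpy()    # embedding, e.g. shape (1, 256)
    return normalize(emb)                          # L2 normalize

eea = embed('face_1.jpg', mm)  # hypothetical path to first face crop
eeb = embed('face_2.jpg', mm)  # hypothetical path to second face crop
print(np.dot(eea, eeb.T))      # cosine similarity; higher means more likely the same person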
How can I know the best threshold for your trained models for face verification? Is there even pseudocode for reading two faces using cv2 and comparing their similarity?
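(For context on the threshold part: a verification threshold is usually chosen empirically, by sweeping candidate values over a set of labelled same/different pairs and keeping the one with the highest accuracy. A minimal sketch of that sweep, where `similarities` and `labels` are hypothetical arrays of cosine similarities and 1/0 same-person labels from such a validation set:)

import numpy as np

# Hypothetical validation data: cosine similarity per pair, and 1 if the pair
# is the same person, 0 otherwise.
similarities = np.array([0.72, 0.15, 0.64, 0.08, 0.55, 0.31])
labels       = np.array([1,    0,    1,    0,    1,    0])

# Sweep candidate thresholds and keep the one with the best verification accuracy.
thresholds = np.arange(0.0, 1.0, 0.01)
accuracies = [((similarities > t).astype(int) == labels).mean() for t in thresholds]
best = thresholds[int(np.argmax(accuracies))]
print('best threshold:', best, 'accuracy:', max(accuracies))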