Closed kadirbeytorun closed 5 years ago
The inference code was from the original repo. I thought about this in the beginning and found that it's the same, considering the output embeddings are all normalized to 1.
Hey thanks for the reply,
Makes sense: when cosine distance gets smaller, Euclidean distance decreases too. But since cosine distance was used while training the model, I would expect using it during inference to give better results than Euclidean distance.
I tried it with some examples of my own and observed that Euclidean distance and cosine distance are very close to each other when the samples are L2-normalized. Yet there is still a 4-5% difference.
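For reference, the equivalence the maintainer mentions can be checked numerically. This is a small sketch (using NumPy, with random vectors standing in for real embeddings) showing that for unit-length vectors, squared Euclidean distance is exactly twice the cosine distance, so ranking by one is the same as ranking by the other:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two random embeddings, L2-normalized to unit length
# (as the model's output embeddings are said to be)
a = rng.normal(size=512)
a /= np.linalg.norm(a)
b = rng.normal(size=512)
b /= np.linalg.norm(b)

cos_sim = float(a @ b)              # cosine similarity
cos_dist = 1.0 - cos_sim            # cosine distance
eucl = float(np.linalg.norm(a - b)) # Euclidean distance

# Identity for unit vectors:
# ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b = 2 - 2*cos(a, b)
print(np.isclose(eucl ** 2, 2.0 * cos_dist))
```

So the two distances are monotonically related on normalized embeddings; any percentage gap like the 4-5% above would come from embeddings that are not exactly unit norm, not from the metric itself.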
Hi, thanks for the answer. Going over the code again, how do you make sure that the embeddings are normalized? Thanks.
Hey, I have a question about the Euclidean distance used during inference. Since the model was trained with the ArcFace loss function, why do you use Euclidean distance after you get the embeddings? Shouldn't we use the cosine distance at inference too, since the model was trained on it and learnt to optimize it?
Or am I missing something here?
Regards (btw I love how you put comments all around your code, keep up the good work)