Open · SongZRui opened 5 years ago
At training time we use the original formulation of the contrastive loss, in which Euclidean distance is used to compare two image representations.
At test time we use the dot product, i.e. cosine similarity. Because the vectors are L2-normalized, cosine similarity and Euclidean distance yield the same ranking/ordering. The dot product is faster and simpler to compute than the Euclidean distance, which is why we use it at test time.
For more details, take a look at the equations in my PhD Thesis, Appendix A, paragraph 'Cosine similarity'.
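The ranking equivalence can be checked numerically: for unit vectors, ||a − b||² = 2 − 2·(a·b), so distance is a monotonically decreasing function of similarity. A minimal sketch with NumPy (random vectors, not the repo's actual descriptors):

```python
import numpy as np

rng = np.random.default_rng(0)

# A query vector and a small database of vectors, all L2-normalized.
q = rng.normal(size=128)
q /= np.linalg.norm(q)
db = rng.normal(size=(10, 128))
db /= np.linalg.norm(db, axis=1, keepdims=True)

# Cosine similarity reduces to a plain dot product for unit vectors.
sims = db @ q

# Euclidean distance between unit vectors: ||a - b||^2 = 2 - 2 * a.b
dists = np.linalg.norm(db - q, axis=1)
assert np.allclose(dists**2, 2.0 - 2.0 * sims)

# Sorting by descending similarity gives the same order as
# sorting by ascending distance.
assert np.array_equal(np.argsort(-sims), np.argsort(dists))
print("rankings match")
```

So either metric retrieves the same nearest neighbors; the dot product is just cheaper.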
Got it, thanks!
It seems that you used different criteria during training and testing, as the code below shows.

In test:

```python
scores = np.dot(vecs.T, qvecs)
```

In train:

```python
dif = x1 - x2
D = torch.pow(dif + eps, 2).sum(dim=0).sqrt()
```
I don't understand why you do this?