CompVis / metric-learning-divide-and-conquer

Source code for the paper "Divide and Conquer the Embedding Space for Metric Learning", CVPR 2019
GNU Lesser General Public License v3.0

Is the algorithm for calculating recall@k metrics correct? #10

Open JasonKll opened 4 years ago

JasonKll commented 4 years ago

https://github.com/CompVis/metric-learning-divide-and-conquer/blob/1766c2cffe1075692657898d2086af4bc9d92929/lib/evaluation/recall.py#L9

Hi, is the code above, which calculates the recall@k metric, correct? It looks like top-k accuracy to me (we add 1 to the result sum if at least one image in the retrieval set belongs to the same class as the query image). Recall@k, by contrast, is (# of recommended items @k that are relevant) / (total # of relevant items), as described in this article: https://medium.com/@m_n_malaeb/recall-and-precision-at-k-for-recommender-systems-618483226c54
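For illustration, here is a small toy example (values made up, not from this repo) contrasting the two definitions for a single query:

```python
import numpy as np

# Toy example for one query (labels are made up for illustration).
query_label = 3
ranked_gallery_labels = np.array([5, 3, 3, 7, 3])  # gallery labels sorted by similarity, nearest first
k = 2
top_k = ranked_gallery_labels[:k]

# What recall.py appears to compute: 1 if at least one of the top-k
# retrievals shares the query's class, else 0 (then averaged over queries).
hit_at_k = float((top_k == query_label).any())                    # -> 1.0

# Recall@k as defined in the linked article: correct retrievals in the
# top k divided by the total number of relevant items in the gallery.
num_relevant = int((ranked_gallery_labels == query_label).sum())  # 3 relevant items
recall_at_k = (top_k == query_label).sum() / num_relevant         # -> 1/3 ≈ 0.33

print(hit_at_k, recall_at_k)
```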

Thanks, Jason

SidraHanif180 commented 4 years ago

Hi,

I am also wondering about this: the recall calculation seems to be incorrect, as it only looks for at least one correct retrieval. The correct way to calculate recall can be found at the link below: https://github.com/littleredxh/DREML/blob/master/_code/Utils.py

Please see this part of the code:

```python
for r in rank:
    A = 0
    for i in range(r):
        imgPre = imgLab[idx[:, i]]
        A += (imgPre == imgLab).float()
    acc_list.append((torch.sum((A > 0).float()) / N).item())
```

So, we should compare the predicted labels (imgPre) with the true labels (imgLab) for the retrieved images and divide the number of queries with a match by the total number of images (N) to calculate recall.
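For reference, here is a rough self-contained sketch of how that snippet could be driven end to end; the embedding matrix `emb`, the labels, and the rank values are made up for illustration and are not taken from either repo:

```python
import torch

# Assumed inputs (illustrative only): N L2-normalized embeddings and their class labels.
N, D = 8, 4
emb = torch.nn.functional.normalize(torch.randn(N, D), dim=1)
imgLab = torch.tensor([0, 0, 0, 1, 1, 2, 2, 2])

# Cosine similarity between all pairs; mask the diagonal so a query
# cannot retrieve itself.
sim = emb @ emb.t()
sim.fill_diagonal_(-float('inf'))
idx = sim.argsort(dim=1, descending=True)   # idx[q, i] = index of the i-th nearest neighbor of q

acc_list = []
for r in [1, 2, 4]:
    A = 0
    for i in range(r):
        imgPre = imgLab[idx[:, i]]          # labels of the i-th retrieved neighbors
        A += (imgPre == imgLab).float()     # accumulate per-query matches
    # fraction of queries with at least one correct neighbor among the top r
    acc_list.append((torch.sum((A > 0).float()) / N).item())

print(acc_list)
```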