zakajd / huawei2020

Solution of Huawei Digix Global AI Challenge

Potential bug in the MAP@R metric #3

Closed amirassov closed 2 years ago

amirassov commented 2 years ago

Hello!

I tried to adapt the MAP@R metric for my task and found that your implementation has a bug. I took the examples from the original paper and fed them into your function:

(screenshot: the MAP@R worked examples from the original paper)
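For context, MAP@R averages the precision at each relevant rank within the top-R results and divides by R (the number of relevant items for the query). A hand computation of the second case below, where R = 10 and the relevant items land at ranks 1 and 10, gives the expected 0.12:

```python
# MAP@R for one query with R = 10 relevant items,
# where the relevant items are retrieved at ranks 1 and 10.
R = 10
hit_ranks = [1, 10]  # ranks at which relevant items appear
# Precision at rank k = (# hits among top k) / k, counted only at hit ranks:
precisions = [(i + 1) / rank for i, rank in enumerate(hit_ranks)]  # [1.0, 0.2]
map_at_r = sum(precisions) / R  # (1.0 + 0.2) / 10 = 0.12
```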
import torch
import numpy as np

conformity_matrix = torch.tensor([[True for _ in range(10)] + [False for _ in range(10)]])

permutation_matrix = torch.tensor([[0, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 1, 2, 3, 4, 5, 6, 7, 8, 9]])
np.abs(map_at_k(permutation_matrix, conformity_matrix, topk=None) - 0.1) <= 1e-6
# False

permutation_matrix = torch.tensor([[0, 10, 11, 12, 13, 14, 15, 16, 17, 1, 18, 19, 2, 3, 4, 5, 6, 7, 8, 9]])
np.abs(map_at_k(permutation_matrix, conformity_matrix, topk=None) - 0.12) <= 1e-6
# False

permutation_matrix = torch.tensor([[0, 1, 11, 12, 13, 14, 15, 16, 17, 18, 19, 10, 2, 3, 4, 5, 6, 7, 8, 9]])
np.abs(map_at_k(permutation_matrix, conformity_matrix, topk=None) - 0.2) <= 1e-6
# False

permutation_matrix = torch.arange(20).reshape(1, 20)
np.abs(map_at_k(permutation_matrix, conformity_matrix, topk=None) - 1.0) <= 1e-6
# False

All tests pass if we change line https://github.com/zakajd/huawei2020/blob/master/src/callbacks.py#L91 to:

average_precision = precision.sum(dim=-1) / R
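For reference, here is a minimal self-contained sketch of a `map_at_k` with this fix applied, matching the call signature used above. The function body is a reconstruction, not the repository's exact code, and it assumes every query in the batch has the same number of relevant items when `topk=None`:

```python
import torch

def map_at_k(permutation_matrix, conformity_matrix, topk=None):
    # permutation_matrix: (batch, N) item indices in ranked order
    # conformity_matrix:  (batch, N) booleans, True where an item is relevant
    # Reorder the relevance flags into ranked order.
    relevance = torch.gather(conformity_matrix.long(), 1, permutation_matrix)
    R = conformity_matrix.sum(dim=-1)  # relevant items per query
    if topk is None:
        # MAP@R: cut the ranking off at R (assumes R is equal across the batch)
        topk = int(R.max())
    relevance = relevance[:, :topk].float()
    cum_hits = relevance.cumsum(dim=-1)
    ranks = torch.arange(1, topk + 1, dtype=torch.float32)
    # Precision at each rank, counted only where a relevant item was retrieved.
    precision = cum_hits / ranks * relevance
    # The fix: divide by R, not by the number of retrieved hits.
    average_precision = precision.sum(dim=-1) / R
    return average_precision.mean().item()
```

With this version, all four examples above return the paper's values (0.1, 0.12, 0.2, and 1.0) within 1e-6.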
zakajd commented 2 years ago

Hi, thanks for noticing! Fixed.