Closed — amirassov closed this issue 3 years ago
Hello!
I tried to adapt the MAP@R metric for my task and found out that your implementation has a bug. I just took the examples from the original paper and put them into your function:
```python
import torch
import numpy as np

conformity_matrix = torch.tensor([[True for _ in range(10)] + [False for _ in range(10)]])

permutation_matrix = torch.tensor([[0, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 1, 2, 3, 4, 5, 6, 7, 8, 9]])
np.abs(map_at_k(permutation_matrix, conformity_matrix, topk=None) - 0.1) <= 1e-6  # False

permutation_matrix = torch.tensor([[0, 10, 11, 12, 13, 14, 15, 16, 17, 1, 18, 19, 2, 3, 4, 5, 6, 7, 8, 9]])
np.abs(map_at_k(permutation_matrix, conformity_matrix, topk=None) - 0.12) <= 1e-6  # False

permutation_matrix = torch.tensor([[0, 1, 11, 12, 13, 14, 15, 16, 17, 18, 19, 10, 2, 3, 4, 5, 6, 7, 8, 9]])
np.abs(map_at_k(permutation_matrix, conformity_matrix, topk=None) - 0.2) <= 1e-6  # False

permutation_matrix = torch.arange(20).reshape(1, 20)
np.abs(map_at_k(permutation_matrix, conformity_matrix, topk=None) - 1.0) <= 1e-6  # False
```
All tests pass correctly if we replace line https://github.com/zakajd/huawei2020/blob/master/src/callbacks.py#L91 with:
```python
average_precision = precision.sum(dim=-1) / R
```
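For reference, here is a minimal sketch of MAP@R that reproduces the four examples above. The function name `map_at_r` and its signature are made up for illustration (it is not the repository's `map_at_k`), and it assumes every query has the same number R of relevant items, as in the examples. The key point is the last line: the sum of precisions at correct positions is divided by R, not by the number of correct retrievals.

```python
import torch


def map_at_r(permutation_matrix: torch.Tensor, conformity_matrix: torch.Tensor) -> torch.Tensor:
    """MAP@R per query row (illustrative sketch, assumes equal R for all queries)."""
    R = int(conformity_matrix[0].sum())
    # relevance (1/0) of each retrieved index, truncated to the top R positions
    hits = torch.gather(conformity_matrix.float(), 1, permutation_matrix)[:, :R]
    ranks = torch.arange(1, R + 1, dtype=torch.float32)
    precision_at_i = hits.cumsum(dim=-1) / ranks  # P(i) at positions 1..R
    # only correct positions contribute, and the normalizer is R, not hits.sum()
    return (precision_at_i * hits).sum(dim=-1) / R


conformity_matrix = torch.tensor([[True] * 10 + [False] * 10])
permutation_matrix = torch.tensor([[0, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 1, 2, 3, 4, 5, 6, 7, 8, 9]])
print(map_at_r(permutation_matrix, conformity_matrix))  # tensor([0.1000])
```

On the four cases above this sketch returns 0.1, 0.12, 0.2, and 1.0, matching the paper, precisely because of the division by R.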
Hi, thanks for noticing! Fixed.