jiawei-ren / BalancedMetaSoftmax-Classification

[NeurIPS 2020] Balanced Meta-Softmax for Long-Tailed Visual Recognition
https://github.com/jiawei-ren/BalancedMetaSoftmax

Question about many shot acc and Evaluation_accuracy_micro_top1 #8

Closed e96031413 closed 3 years ago

e96031413 commented 3 years ago

Hello, I am curious about the difference between Many_shot_accuracy_top1 and Evaluation_accuracy_micro_top1 in the testing phase.

If I train with a dataset that has six classes, and each class contains over 100 images, only Many_shot_accuracy_top1 appears in the testing output.

My question is: why are the values of Many_shot_accuracy_top1 and Evaluation_accuracy_micro_top1 different? Shouldn't they be the same, since every class contains over 100 images? Or do I misunderstand these two metrics?

jiawei-ren commented 3 years ago

Sorry for the late reply. Many_shot_accuracy_top1 averages over classes while Evaluation_accuracy_micro_top1 averages over images. They will have different values if labels are not uniformly distributed in the test set.
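The distinction can be illustrated with a small NumPy sketch (hypothetical labels and predictions, not the repo's actual evaluation code): micro accuracy averages correctness over images, while per-class (many-shot style) accuracy averages over classes, so they diverge when the test-label distribution is imbalanced.

```python
import numpy as np

# Hypothetical imbalanced test set: class 0 has 8 images, class 1 has 2.
labels = np.array([0] * 8 + [1] * 2)
preds  = np.array([0] * 8 + [1, 0])  # all class-0 correct, 1 of 2 class-1 correct

correct = preds == labels

# Micro top-1: one average over all images.
micro = correct.mean()  # 9/10 = 0.9

# Per-class (macro) top-1: average each class's accuracy, then average the classes.
per_class = [correct[labels == c].mean() for c in np.unique(labels)]
macro = float(np.mean(per_class))  # (1.0 + 0.5) / 2 = 0.75

print(micro, macro)
```

With a uniform test set (equal images per class), the two numbers would coincide; the gap above comes entirely from the 8-vs-2 label imbalance.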

e96031413 commented 3 years ago

@jiawei-ren Thanks for your reply.