by-liu / CALS

Code for our method CALS (Class Adaptive Label Smoothing) for network calibration. To appear at CVPR 2023. Paper: https://arxiv.org/abs/2211.15088
MIT License

Question about accuracy on ImageNet-LT #3

Closed · JiachengCheng96 closed 1 year ago

JiachengCheng96 commented 1 year ago

Hi, thanks for releasing such high-quality code! It looks very helpful for my research.

I have a simple question about the accuracy of the CE baseline reported on ImageNet-LT. It (~38%) seems to be significantly lower than in some other papers (e.g., ~44% in [1], ~45% in [2]). Could you kindly provide any hints on what leads to this gap (e.g., a different training/test protocol)?

[1] https://github.com/dvlab-research/MiSLAS [2] https://github.com/KaihuaTang/Long-Tailed-Recognition.pytorch

by-liu commented 1 year ago

Hi, @JiachengCheng96

Thank you for your interest in the code and paper, and sorry for my late reply.

In our paper, we focused on comparing the calibration performance of different calibration losses. Therefore, we skipped the techniques specifically designed for long-tailed recognition, such as the balanced data samplers or balanced losses you mentioned.

I suppose most recent techniques for long-tailed recognition (like balanced data samplers or balanced losses) are orthogonal to our CALS. I tested balanced softmax, and without its Meta Sampler I obtained an accuracy of 41.96%, with ECE further decreased to 1.75%. The code is currently in another branch: https://github.com/by-liu/CALS/blob/long/calibrate/losses/balanced_softmax.py. Also, mixup is already supported in the repo: https://github.com/by-liu/CALS/blob/long/configs/mixup/mixup.yaml.
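For readers unfamiliar with balanced softmax, here is a minimal PyTorch sketch of the underlying idea: shift the logits by the log of the per-class sample counts before applying standard cross-entropy. This is only an illustrative sketch, not the implementation in the linked branch; the class name and the `samples_per_class` argument are made up for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BalancedSoftmaxLoss(nn.Module):
    """Illustrative balanced softmax: add the log class prior to the logits,
    then apply standard cross-entropy."""

    def __init__(self, samples_per_class):
        super().__init__()
        counts = torch.as_tensor(samples_per_class, dtype=torch.float)
        # Log of per-class sample counts; a buffer so it follows .to(device).
        self.register_buffer("log_prior", torch.log(counts))

    def forward(self, logits, targets):
        # Adjusted logits z_c + log(n_c): head classes get a larger offset,
        # so tail classes are implicitly up-weighted in the softmax.
        adjusted = logits + self.log_prior.unsqueeze(0)
        return F.cross_entropy(adjusted, targets)


# Hypothetical usage: counts would come from the training-set label histogram.
criterion = BalancedSoftmaxLoss(samples_per_class=[500, 50, 5])
loss = criterion(torch.randn(8, 3), torch.randint(0, 3, (8,)))
```

Since the adjustment only touches the loss term, it can in principle be combined with a calibration objective such as CALS without changing the rest of the training pipeline, which is what "orthogonal" refers to above.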

I will be evaluating other recent long-tailed recognition techniques, and you are welcome to evaluate them on top of the code if you would like.

All the best