Alibaba-MIIL / ASL

Official PyTorch implementation of the paper "Asymmetric Loss For Multi-Label Classification" (ICCV, 2021)
MIT License

Training issue #111

Open · snehashis1997 opened this issue 4 months ago

snehashis1997 commented 4 months ago
| Dataset (f/m counts) | gamma_neg | Highest mAP (after 40 epochs) |
| --- | --- | --- |
| f/m: 460/694 | 4 | 86.69 |
| f/m: 460/694 | 3 | 87.94 |
| f/m: 460/694 | 2 | 87.67 |
| f/m: 460/694 | 5 | 85.66 |
| f/m: 460/694 | 6 | 84.34 |
| f/m: 464/1500 | 4 | 90.21 |
| f/m: 464/1500 | 3 | 92.95 |
| f/m: 464/1500 | 2 | 90.39 |
| f/m: 464/1500 | 5 | 90.88 |
| f/m: 464/1500 | 6 | 91.23 |
| f/m: 464/1942 | 4 | 92.51 |
| f/m: 464/1942 | 3 | 92.6 |
| f/m: 464/1942 | 2 | 92.99 |
| f/m: 464/1942 | 5 | 92.9 |
| f/m: 464/1942 | 6 | 90.73 |
snehashis1997 commented 4 months ago

Here are my findings from ASL training on my custom dataset, where "f/m" denotes the two classes and their respective item counts. In this setup I fixed gamma_pos at 0 and swept gamma_neg (see the sketch below). You can see that I steadily increased the imbalance between the two classes, but I could not find a clear relationship between the gamma_neg value and mAP (mean Average Precision).
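For context, this is roughly the loss setup I am sweeping, written as a minimal, self-contained sketch from the paper's formulation rather than copied from my training script; the clip value of 0.05 and all variable names here are my own assumptions:

```python
import torch

def asymmetric_loss(logits, targets, gamma_neg=4, gamma_pos=0, clip=0.05, eps=1e-8):
    """Sketch of ASL: separate focusing exponents for positives and negatives."""
    p = torch.sigmoid(logits)
    # Probability shifting (the clip / margin m) applies to negatives only.
    p_neg = (p - clip).clamp(min=0)
    # Positive term: (1 - p)^gamma_pos * log(p); gamma_pos = 0 disables focusing here.
    loss_pos = targets * (1 - p) ** gamma_pos * torch.log(p.clamp(min=eps))
    # Negative term: p_m^gamma_neg * log(1 - p_m); larger gamma_neg suppresses
    # easy negatives more aggressively.
    loss_neg = (1 - targets) * p_neg ** gamma_neg * torch.log((1 - p_neg).clamp(min=eps))
    return -(loss_pos + loss_neg).sum()

# Hypothetical sweep mirroring the table above: one training run per gamma_neg,
# recording the highest mAP after 40 epochs.
for gamma_neg in (2, 3, 4, 5, 6):
    criterion = lambda out, tgt: asymmetric_loss(out, tgt, gamma_neg=gamma_neg)
    # ... train the model with `criterion` and log mAP ...
```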

For the first dataset, where the imbalance is not too high, we get the best mAP with gamma_neg == 3. However, in the third set, where the imbalance is significantly higher, we achieve the best mAP with gamma_neg == 2. I would have expected the best mAP at gamma_neg == 6 there, since a larger gamma_neg should down-weight the more numerous easy negatives. Can you please explain if I am doing something wrong? What are your suggestions?
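To show where my expectation came from: ASL weights each negative sample by p_m ** gamma_neg (with p_m = max(p - clip, 0)), so raising gamma_neg suppresses easy negatives exponentially while largely preserving hard ones. A rough numeric illustration (the probabilities 0.1 / 0.7 and clip 0.05 are arbitrary values I picked):

```python
# Illustration only: ASL's negative focusing weight p_m ** gamma_neg for an
# easy negative (p = 0.1) versus a hard negative (p = 0.7), with clip = 0.05.
clip = 0.05
p_easy, p_hard = 0.1 - clip, 0.7 - clip
for gamma_neg in (2, 3, 4, 5, 6):
    w_easy = p_easy ** gamma_neg
    w_hard = p_hard ** gamma_neg
    print(f"gamma_neg={gamma_neg}: easy weight={w_easy:.1e}, hard/easy ratio={w_hard / w_easy:.0f}")
```

Of course, a very large gamma_neg also shrinks the total gradient contribution of all negatives, so maybe that is part of why the sweep peaks at moderate values on my data.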