This may be due to a problem with the backbone that we overlooked. We follow FixMatch (https://github.com/kekmodel/FixMatch-pytorch) to build our code, which sets the WideResNet width to 8 for CIFAR100-LT. However, most LTSSL methods set the width to 2 in this setting. Unfortunately, we discovered this problem only after the paper was accepted. For a fair comparison, we have updated the model width for CIFAR100-LT training to 2 in our code (https://github.com/Gank0078/ACR/blob/main/train.py#L280). In our experiments, our method still achieves an average improvement of 1.96% over previous SOTA methods on the relevant CIFAR100-LT settings. We will update the related results in the paper soon to avoid confusion.
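For reference, the change boils down to the backbone configuration chosen per dataset. The sketch below is not the exact code in train.py, just an illustration of the setting described above (the wrn_config helper is hypothetical):

import argparse

# Minimal sketch, not the repo's exact code: dataset-dependent WideResNet
# settings. The fix described above changes the CIFAR100-LT widen factor
# from 8 (inherited from FixMatch) to 2, so the backbone matches the
# WRN-28-2 used by most LTSSL baselines.
def wrn_config(dataset: str):
    """Return the (depth, widen_factor) pair assumed for each benchmark."""
    if dataset == 'cifar10':
        return 28, 2
    if dataset == 'cifar100':
        return 28, 2   # previously (28, 8) before the fix
    raise ValueError(f'unknown dataset: {dataset}')

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--dataset', default='cifar100',
                        choices=['cifar10', 'cifar100'])
    args = parser.parse_args()
    depth, width = wrn_config(args.dataset)
    print(f'Using WideResNet-{depth}-{width} for {args.dataset}')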
Thank you for your reply~
Hi, sorry to bother you again. I tried to reproduce the results reported in your paper on CIFAR100-LT. Here is my command:
python train.py --dataset cifar100 --num-max 50 --num-max-u 400 --arch wideresnet --batch-size 64 --lr 0.03 --seed 0 --imb-ratio-label 20 --imb-ratio-unlabel 20 --ema-u 0.99 --out out/cifar-100/N50_M400/consistent
However, I only reached an accuracy of around 44.5 (the reported one is 48.0). Is there something wrong with my settings?
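For context, here is how I read the data split flags. This is my own sketch, assuming the exponential long-tail profile commonly used in LTSSL papers; it may differ from the exact sampling code in the repo:

# Assumed reading of --num-max / --imb-ratio-label: class k receives roughly
# num_max * (1/imb_ratio)^(k / (K-1)) labeled samples (exponential imbalance).
def lt_class_counts(num_max: int, imb_ratio: float, num_classes: int):
    """Per-class sample counts for an exponentially imbalanced split."""
    return [int(num_max * (1.0 / imb_ratio) ** (k / (num_classes - 1)))
            for k in range(num_classes)]

labeled = lt_class_counts(num_max=50, imb_ratio=20, num_classes=100)
unlabeled = lt_class_counts(num_max=400, imb_ratio=20, num_classes=100)
print(labeled[:3], labeled[-3:])      # head vs. tail labeled counts
print(sum(labeled), sum(unlabeled))   # total labeled / unlabeled samples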
Thank you~