Closed xinqiaozhao closed 2 years ago
Hi, thanks for your interest. Have you tried running RIDE on ImageNet-LT? It seems I have not included the code for CIFAR100-LT. I'll check the code again.
Regards, Zhi Hou
Thank you for your quick reply. I'm running the ImageNet-LT code, and I'm really looking forward to your code for CIFAR100-LT; that would be very helpful. Thank you! :)
Thanks. I think I've found the issue. Do you get a result of only around 0.25? I added a shared classifier for CIFAR100-LT because I found BatchFormer does not work on CIFAR100-LT without one, even though BatchFormer without a shared classifier works impressively on ImageNet-LT and iNaturalist.
Regards,
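For readers following along, the shared-classifier idea mentioned above can be sketched roughly as follows. This is a minimal, hypothetical PyTorch sketch, not the repo's actual code: the names, dimensions, and hyperparameters are illustrative. A transformer encoder layer attends across the batch dimension, and one linear classifier is shared between the original features and the BatchFormer-transformed features during training.

```python
import torch
import torch.nn as nn

class BatchFormerSketch(nn.Module):
    """Illustrative sketch: batch-level attention plus a shared classifier.

    All names and sizes here are assumptions for illustration, not the
    actual implementation in the repository.
    """

    def __init__(self, feat_dim=64, num_classes=100):
        super().__init__()
        # Attention is applied across the batch: the mini-batch is treated
        # as a sequence, so samples can exchange information.
        self.encoder = nn.TransformerEncoderLayer(
            d_model=feat_dim, nhead=4, dim_feedforward=feat_dim)
        self.classifier = nn.Linear(feat_dim, num_classes)  # shared head

    def forward(self, feats):  # feats: [B, feat_dim]
        logits_plain = self.classifier(feats)
        # Shape [B, 1, feat_dim]: sequence length = batch size B,
        # "batch" dimension of the encoder = 1.
        bt_feats = self.encoder(feats.unsqueeze(1)).squeeze(1)
        logits_bt = self.classifier(bt_feats)  # same classifier reused
        return logits_plain, logits_bt

m = BatchFormerSketch()
p, b = m(torch.randn(8, 64))
print(p.shape, b.shape)
```

During training both sets of logits would receive the classification loss, so the shared classifier sees features with and without batch-level attention; at test time only the plain branch is needed.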
I did not include BatchFormer for RIDE on CIFAR100-LT. I have now updated the code for CIFAR100-LT. You can run it like this:
python train.py -c "./configs/config_imbalance_cifar100_ride.json" --reduce_dimension 1 --num_experts 3 --add_bt 33
Empirically, BatchFormer does not improve RIDE on CIFAR100-LT.
Thank you so much!
Yes, I got a result of around 0.25 yesterday.
Thank you so much for sharing the code. I ran into an issue when trying to reproduce the RIDE results on CIFAR100-LT reported in your paper.
I used this command: python train.py -c "./configs/config_imbalance_cifar100_ride.json" --reduce_dimension 1 --num_experts 3 --add_bt 1
I followed the val_accuracy printed at each iteration, but the result is very poor, so I suspect I'm using it the wrong way. Could you give me a hint on how to reproduce the CIFAR100-LT result? Thank you so much; it would help a lot.