YyzHarry / imbalanced-semi-self

[NeurIPS 2020] Semi-Supervision (Unlabeled Data) & Self-Supervision Improve Class-Imbalanced / Long-Tailed Learning
https://arxiv.org/abs/2006.07529
MIT License

About the method #1

Closed mingliangzhang2018 closed 4 years ago

mingliangzhang2018 commented 4 years ago

Thank you for sharing your interesting work. Would you mind clarifying what the "CE(Balanced)" method is?

YyzHarry commented 4 years ago

Hi, thanks for your interest! "CE(Balanced)" means CE with class-balanced sampling.
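
For context, class-balanced sampling usually means drawing each training example with probability inversely proportional to its class frequency, so every class contributes roughly equally to each batch, while the cross-entropy loss itself is left unchanged. Below is a minimal PyTorch sketch of this idea; the toy dataset and variable names are illustrative assumptions, not code from this repository.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

# Toy long-tailed data: 1000 samples of class 0, 100 of class 1, 10 of class 2 (illustrative).
labels = torch.tensor([0] * 1000 + [1] * 100 + [2] * 10)
features = torch.randn(len(labels), 8)
train_dataset = TensorDataset(features, labels)

# Weight each sample by the inverse frequency of its class.
class_counts = torch.bincount(labels).float()
sample_weights = 1.0 / class_counts[labels]

# Sampling with replacement lets rare classes appear as often as frequent ones per epoch.
sampler = WeightedRandomSampler(sample_weights, num_samples=len(labels), replacement=True)
train_loader = DataLoader(train_dataset, batch_size=128, sampler=sampler)

# Training then applies the plain cross-entropy loss to these class-balanced batches.
```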

mingliangzhang2018 commented 4 years ago

> Hi, thanks for your interest! "CE(Balanced)" means CE with class-balanced sampling.

Thanks!

mingliangzhang2018 commented 4 years ago

I still have some questions. First, why did you not use the ResNeXt models as in the "Decoupling" paper? Second, I see that you use a fairly small batch size (100 or 128) when training models with SSP on ImageNet-LT and iNaturalist 2018. Did you also test all the baseline methods without SSP using the same batch size? In other words, could the performance improvement come mainly from the batch-size adjustment rather than from SSP?

YyzHarry commented 4 years ago
  • ResNeXt architecture: simply because we have limited computation resources. Since we compare against many methods rather than only the decoupling method, we chose one or two representative architectures. I believe similar results should hold for other architectures.
  • Batch size: for ImageNet-LT I use 128 for all models (and 100 for iNaturalist 2018), which is the largest batch size my GPUs can hold. Note that the baselines and the models with SSP use exactly the same training setup; the only difference is that the latter load a self-supervised pre-trained model for initialization (a minimal sketch of this is shown after this list).
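
To make "same setup, only a different initialization" concrete, here is a hedged PyTorch sketch; the ResNet-50 choice, checkpoint path, and key names are illustrative assumptions, not the repository's exact code.

```python
import torch
import torchvision.models as models

def build_model(ssp_checkpoint=None):
    """Build a ResNet-50 classifier; optionally initialize the backbone from a
    self-supervised pre-trained (SSP) checkpoint. Paths and key names are illustrative."""
    model = models.resnet50(num_classes=1000)
    if ssp_checkpoint is not None:
        # Assumes the checkpoint stores a plain state_dict of the backbone.
        state = torch.load(ssp_checkpoint, map_location="cpu")
        # Keep only backbone weights; the classifier head is trained from scratch.
        backbone_state = {k: v for k, v in state.items() if not k.startswith("fc.")}
        model.load_state_dict(backbone_state, strict=False)
    return model

# Baseline and SSP runs then share the identical optimizer, schedule, and batch size (e.g. 128).
baseline = build_model()
ssp_model = build_model("ssp_pretrained.pth")  # hypothetical checkpoint file
```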

mingliangzhang2018 commented 4 years ago

Thank you very much!