Vanint / SADE-AgnosticLT

This repository is the official PyTorch implementation of Self-Supervised Aggregation of Diverse Experts for Test-Agnostic Long-Tailed Recognition (NeurIPS 2022).

About Backbone #3

Closed: oldfemalepig closed this issue 2 years ago

oldfemalepig commented 2 years ago

Hi, thank you very much for your work. I would like to ask whether you tried ResNeXt101-32x4d instead of ResNeXt50 in your experiments. In my experiments, ResNeXt101 is not as effective as ResNeXt50. Are there any other parameters that need to be changed besides the backbone?

Best,

Vanint commented 2 years ago

Hi, thanks for your attention. Yes, I have tried ResNeXt101 on ImageNet-LT, and it performs better than ResNeXt50. What is your performance with this backbone?

oldfemalepig commented 2 years ago

> Hi, thanks for your attention. Yes, I have tried ResNeXt101 on ImageNet-LT, and it performs better than ResNeXt50. What is your performance with this backbone?

It reached 57.18 with ResNeXt101, using 200 epochs, batch size 128, and lr 0.025. Should I tweak the epochs and lr further? May I ask how your parameters are set?

Vanint commented 2 years ago

> Hi, thanks for your attention. Yes, I have tried ResNeXt101 on ImageNet-LT, and it performs better than ResNeXt50. What is your performance with this backbone?
>
> It reached 57.18 with ResNeXt101, using 200 epochs, batch size 128, and lr 0.025. Should I tweak the epochs and lr further? May I ask how your parameters are set?

You can try batch size 64, 180 epochs, and lr 0.025. If I remember correctly, this setting achieves about 59.2.
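
For reference, below is a minimal sketch of what those suggested settings could look like in plain PyTorch. The thread only specifies batch size, epochs, and learning rate; the optimizer choice (SGD with momentum and weight decay), the cosine schedule, and the use of torchvision's `resnext101_32x8d` as a stand-in backbone (the thread discusses a 32x4d variant, which this repo builds from its own model code and JSON configs) are assumptions for illustration, not settings confirmed by the authors.

```python
# Sketch of the hyperparameters suggested above (batch size 64, 180 epochs,
# lr 0.025). Optimizer, momentum, weight decay, schedule, and the torchvision
# backbone are illustrative assumptions, not the repo's confirmed settings.
import torch
from torchvision import models

# Stand-in backbone: torchvision only ships the 32x8d variant;
# SADE-AgnosticLT builds ResNeXt101-32x4d from its own model code.
model = models.resnext101_32x8d(num_classes=1000)

batch_size = 64     # value suggested in the thread
epochs = 180        # value suggested in the thread
lr = 0.025          # value suggested in the thread

optimizer = torch.optim.SGD(
    model.parameters(), lr=lr, momentum=0.9, weight_decay=5e-4
)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=epochs)

# The training loader would wrap the ImageNet-LT split prepared by the repo:
# train_loader = torch.utils.data.DataLoader(
#     train_dataset, batch_size=batch_size, shuffle=True, num_workers=8)
```

In practice these values are set through the repo's JSON config files rather than a standalone script, so the sketch above is only meant to make the suggested numbers concrete.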