
Test-Agnostic Long-Tailed Recognition

This repository is the official PyTorch implementation of Self-Supervised Aggregation of Diverse Experts for Test-Agnostic Long-Tailed Recognition (NeurIPS 2022).
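
SADE trains multiple diverse experts and then aggregates them in a self-supervised manner at test time. The toy sketch below illustrates the general flavour of such test-time aggregation: learning non-negative weights over frozen experts' logits by maximizing prediction agreement between two augmented views of unlabeled test data. It is an illustrative assumption only, not the exact objective or training loop used in this repository.

```python
# Toy illustration of self-supervised expert aggregation (assumption, not the
# repository's actual code): weight several experts' logits and tune the
# weights so that predictions on two augmented views of the test data agree.
import torch
import torch.nn.functional as F

def aggregate(expert_logits, w):
    """expert_logits: (num_experts, batch, num_classes); w: (num_experts,)."""
    weights = F.softmax(w, dim=0)                  # non-negative weights summing to 1
    return torch.einsum("e,ebc->bc", weights, expert_logits)

# Toy data: 3 frozen experts, a batch of 8 unlabeled test samples, 10 classes,
# logits computed on two augmented views of the same batch.
logits_view1 = torch.randn(3, 8, 10)
logits_view2 = logits_view1 + 0.1 * torch.randn(3, 8, 10)

w = torch.zeros(3, requires_grad=True)             # aggregation weights to learn
optimizer = torch.optim.SGD([w], lr=0.1)

for _ in range(100):
    p1 = F.softmax(aggregate(logits_view1, w), dim=1)
    p2 = F.softmax(aggregate(logits_view2, w), dim=1)
    # Maximize agreement (cosine similarity) between the two views' predictions.
    loss = -F.cosine_similarity(p1, p2, dim=1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```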

1. Results

(1) ImageNet-LT (ResNeXt-50)

Long-tailed recognition with uniform test class distribution:

| Methods | MACs (G) | Top-1 acc. | Model |
| --- | --- | --- | --- |
| Softmax | 4.26 | 48.0 | |
| RIDE | 6.08 | 56.3 | |
| SADE (ours) | 6.08 | 58.8 | Download |
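
The MACs (G) column reports inference cost in billions of multiply-accumulate operations per image. Below is one possible way to reproduce such a number with the third-party `thop` profiler and a standard torchvision ResNeXt-50 backbone; this is an assumption for illustration, not necessarily how the authors profiled their models.

```python
# Estimate MACs and parameter count for a ResNeXt-50 backbone (illustrative;
# the multi-expert models in this repository differ from plain torchvision ones).
import torch
from thop import profile
from torchvision.models import resnext50_32x4d

model = resnext50_32x4d(num_classes=1000)
dummy_input = torch.randn(1, 3, 224, 224)
macs, params = profile(model, inputs=(dummy_input,))
print(f"MACs: {macs / 1e9:.2f} G, params: {params / 1e6:.2f} M")
```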

Test-agnostic long-tailed recognition:

| Methods | MACs (G) | Forward-50 | Forward-10 | Uniform | Backward-10 | Backward-50 |
| --- | --- | --- | --- | --- | --- | --- |
| Softmax | 4.26 | 66.1 | 60.3 | 48.0 | 34.9 | 27.6 |
| RIDE | 6.08 | 67.6 | 64.0 | 56.3 | 48.7 | 44.0 |
| SADE (ours) | 6.08 | 69.4 | 65.4 | 58.8 | 54.5 | 53.1 |
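
Forward-k and Backward-k denote simulated test class distributions that are themselves long-tailed with imbalance ratio k, following the training class order (Forward) or its reverse (Backward); Uniform is the usual balanced test set. The sketch below shows one way such class priors can be generated, assuming the exponential-profile protocol of LADE (credited in the acknowledgements); the exact sampling code in this repository may differ.

```python
# Sketch of a shifted test class prior with a given imbalance ratio (assumption
# based on the LADE-style protocol; not necessarily this repository's code).
import numpy as np

def test_class_priors(num_classes, imb_ratio, backward=False):
    """Return per-class probabilities whose max/min ratio equals imb_ratio."""
    idx = np.arange(num_classes)
    priors = imb_ratio ** (-idx / (num_classes - 1))   # decays from 1 to 1/imb_ratio
    if backward:
        priors = priors[::-1]                          # tail classes become most frequent
    return priors / priors.sum()

# Example: a Forward-50 distribution over the 1000 ImageNet-LT classes.
p = test_class_priors(1000, imb_ratio=50)
print(p.max() / p.min())                               # == 50
```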

(2) CIFAR100-Imbalance ratio 100 (ResNet-32)

Long-tailed recognition with uniform test class distribution:

| Methods | MACs (G) | Top-1 acc. |
| --- | --- | --- |
| Softmax | 0.07 | 41.4 |
| RIDE | 0.11 | 48.0 |
| SADE (ours) | 0.11 | 49.8 |

Test-agnostic long-tailed recognition:

| Methods | MACs (G) | Forward-50 | Forward-10 | Uniform | Backward-10 | Backward-50 |
| --- | --- | --- | --- | --- | --- | --- |
| Softmax | 0.07 | 62.3 | 56.2 | 41.4 | 25.8 | 17.5 |
| RIDE | 0.11 | 63.0 | 57.0 | 48.0 | 35.4 | 29.3 |
| SADE (ours) | 0.11 | 65.9 | 58.3 | 49.8 | 43.9 | 42.4 |

(3) Places-LT (ResNet-152)

Long-tailed recognition with uniform test class distribution:

| Methods | MACs (G) | Top-1 acc. |
| --- | --- | --- |
| Softmax | 11.56 | 31.4 |
| RIDE | 13.18 | 40.3 |
| SADE (ours) | 13.18 | 40.9 |

Test-agnostic long-tailed recognition:

| Methods | MACs (G) | Forward-50 | Forward-10 | Uniform | Backward-10 | Backward-50 |
| --- | --- | --- | --- | --- | --- | --- |
| Softmax | 11.56 | 45.6 | 40.2 | 31.4 | 23.4 | 19.4 |
| RIDE | 13.18 | 43.1 | 41.6 | 40.3 | 38.2 | 36.9 |
| SADE (ours) | 13.18 | 46.4 | 43.3 | 40.9 | 41.4 | 41.6 |

(4) iNaturalist 2018 (ResNet-50)

Long-tailed recognition with uniform test class distribution:

| Methods | MACs (G) | Top-1 acc. |
| --- | --- | --- |
| Softmax | 4.14 | 64.7 |
| RIDE | 5.80 | 71.8 |
| SADE (ours) | 5.80 | 72.9 |

Test-agnostic long-tailed recognition:

| Methods | MACs (G) | Forward-3 | Forward-2 | Uniform | Backward-2 | Backward-3 |
| --- | --- | --- | --- | --- | --- | --- |
| Softmax | 4.14 | 65.4 | 65.5 | 64.7 | 64.0 | 63.4 |
| RIDE | 5.80 | 71.5 | 71.9 | 71.8 | 71.9 | 71.8 |
| SADE (ours) | 5.80 | 72.3 | 72.5 | 72.9 | 73.5 | 73.3 |

2. Requirements

3. Datasets

(1) Four benchmark datasets

```
data
├── ImageNet_LT
│   ├── test
│   ├── train
│   └── val
├── CIFAR100
│   └── cifar-100-python
├── Place365
│   ├── data_256
│   ├── test_256
│   └── val_256
└── iNaturalist
    ├── test2018
    └── train_val2018
```

(2) Txt files
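
No format details are given here, but split files in comparable long-tailed recognition codebases usually list one relative image path and one integer class label per line. A minimal parsing sketch under that assumption follows (the file name in the example is a placeholder):

```python
# Parse a split file assumed to contain "relative/image/path.jpg class_index"
# per line; the format and file name are assumptions for illustration.
from pathlib import Path

def read_split(txt_path):
    """Return a list of (image_path, label) pairs."""
    samples = []
    for line in Path(txt_path).read_text().splitlines():
        if not line.strip():
            continue
        path, label = line.rsplit(maxsplit=1)
        samples.append((path, int(label)))
    return samples

# Example usage (hypothetical file name):
# samples = read_split("data/ImageNet_LT/ImageNet_LT_train.txt")
```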

4. Pretrained models

5. Script

(1) ImageNet-LT

Training

Evaluate

Test-time training

(2) CIFAR100-LT

Training

Evaluate

Test-time training

(3) Places-LT

Training

Evaluate

Test-time training

(4) iNaturalist 2018

Training

Evaluate

Test-time training

6. Citation

If you find our work inspiring or use our codebase in your research, please cite our work.

```
@inproceedings{zhang2022self,
  title={Self-Supervised Aggregation of Diverse Experts for Test-Agnostic Long-Tailed Recognition},
  author={Zhang, Yifan and Hooi, Bryan and Hong, Lanqing and Feng, Jiashi},
  booktitle={Advances in Neural Information Processing Systems},
  year={2022}
}
```

7. Acknowledgements

This project is built on this PyTorch template.

The multi-expert framework is based on RIDE. The generation of test-agnostic class distributions takes reference from LADE.