
MDCS: More Diverse Experts with Consistency Self-distillation for Long-tailed Recognition [Official, ICCV 2023, Paper] πŸ”₯

Qihao Zhao1,2, Chen Jiang1, Wei Hu1, Fan Zhang1, Jun Liu2

1 Beijing University of Chemical Technology

2 Singapore University of Technology and Design

(Framework overview figure: MDCS)

0. Citation

If you find our work inspiring or use our codebase in your research, please consider giving a star ⭐ and a citation.

@InProceedings{Zhao_2023_ICCV,
    author    = {Zhao, Qihao and Jiang, Chen and Hu, Wei and Zhang, Fan and Liu, Jun},
    title     = {MDCS: More Diverse Experts with Consistency Self-distillation for Long-tailed Recognition},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {11597-11608}
}

1. Training

(1) CIFAR100-LT

Training
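A rough sketch of a CIFAR100-LT training run, assuming the SADE/RIDE-style `train.py` entry point with a JSON config; the config filename is illustrative, check `configs/` for the actual file:

python train.py -c configs/config_cifar100_ir100_mdcs.json  # illustrative config name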

Evaluate
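A sketch of evaluation, assuming the template-style `test.py` that loads a saved checkpoint; the checkpoint path is a placeholder:

python test.py -r checkpoints/cifar100_ir100_mdcs/model_best.pth  # placeholder checkpoint path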

(2) ImageNet-LT

Training
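Sketch as above, assuming an illustrative ImageNet-LT config name:

python train.py -c configs/config_imagenet_lt_resnext50_mdcs.json  # illustrative config name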

Evaluate
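Sketch, with a placeholder checkpoint path:

python test.py -r checkpoints/imagenet_lt_mdcs/model_best.pth  # placeholder checkpoint path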

(3) Places-LT

Training
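Sketch, assuming an illustrative Places-LT config name:

python train.py -c configs/config_places_lt_resnet152_mdcs.json  # illustrative config name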

Evaluate
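Sketch, with a placeholder checkpoint path:

python test.py -r checkpoints/places_lt_mdcs/model_best.pth  # placeholder checkpoint path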

(4) iNaturalist 2018

Training
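Sketch, assuming an illustrative iNaturalist 2018 config name:

python train.py -c configs/config_iNaturalist_resnet50_mdcs.json  # illustrative config name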

Evaluate
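Sketch, with a placeholder checkpoint path:

python test.py -r checkpoints/inat2018_mdcs/model_best.pth  # placeholder checkpoint path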

2. Requirements
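As an assumption (the codebase builds on the SADE/RIDE PyTorch training framework), a minimal environment would include torch, torchvision, numpy, and tensorboard; see the repository's requirements.txt for exact versions.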

3. Datasets

(1) Four benchmark datasets

data
├── ImageNet_LT
│   ├── test
│   ├── train
│   └── val
├── CIFAR100
│   └── cifar-100-python
├── Place365
│   ├── data_256
│   ├── test_256
│   └── val_256
└── iNaturalist
    ├── test2018
    └── train_val2018

(2) Txt files

4. Pretrained models

5. Acknowledgements

The multi-expert framework is based on SADE and RIDE. Strong augmentations are based on NCL and PaCo.