Vanint / SADE-AgnosticLT

This repository is the official PyTorch implementation of "Self-Supervised Aggregation of Diverse Experts for Test-Agnostic Long-Tailed Recognition" (NeurIPS 2022).
MIT License

Doubts regarding the experimental setup #7

Closed rahulvigneswaran closed 2 years ago

rahulvigneswaran commented 2 years ago
  1. For CIFAR-100-LT:
     a. Are there separate validation and test sets?
     b. On which dataset split do you choose the best-trained model?
     c. Which split do you use for hyperparameter tuning?

  2. For iNaturalist18:
     a. Are there separate validation and test sets?
     b. On which dataset split do you choose the best-trained model?
     c. Which split do you use for hyperparameter tuning?
     d. Even though an official test set is available for iNaturalist18 (https://github.com/visipedia/inat_comp/tree/master/2018#Data), why don't you use it?

  3. General doubts:
     a. What seeds do you use?
     b. Do you report the mean over multiple seeds?

Vanint commented 2 years ago

Hi, thanks for your interest! For these two datasets (CIFAR-LT and iNat-18), we follow the common practice of previous papers [1,2,3] to enable a fair comparison: the validation and test sets are the same. Specifically, CIFAR-LT uses the standard CIFAR validation (test) set; for iNat-18, please refer to the data txt files in this repository, which are consistent with previous papers [1,2,3]. Like those methods, we keep this validation (i.e., test) set unchanged and select the best model on it for these two datasets. Note that ImageNet-LT and Places-LT have independent validation sets, which we use for hyper-parameter tuning. Furthermore, similar to RIDE [3], we did not fix a specific seed in our experiments, since we found the training and performance of our method to be stable.

[1] Decoupling Representation and Classifier for Long-Tailed Recognition. ICLR, 2020.
[2] Disentangling Label Distribution for Long-Tailed Visual Recognition. CVPR, 2021.
[3] Long-Tailed Recognition by Routing Diverse Distribution-Aware Experts. ICLR, 2021.
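
For concreteness, here is a minimal sketch of what "the validation and test sets are the same" looks like in practice for CIFAR-100-LT, assuming a torchvision-based pipeline. This is an illustration, not the repository's actual data code; the normalization statistics and loader settings are assumptions.

```python
# Minimal sketch (not the repository's actual data pipeline): the standard
# balanced CIFAR-100 test split serves as both the validation and the test set.
# Normalization statistics and batch size here are illustrative assumptions.
import torch
import torchvision
import torchvision.transforms as T

eval_transform = T.Compose([
    T.ToTensor(),
    T.Normalize(mean=(0.5071, 0.4865, 0.4409), std=(0.2673, 0.2564, 0.2762)),
])

# train=False gives the standard 10,000-image balanced CIFAR-100 test split.
val_test_set = torchvision.datasets.CIFAR100(
    root="./data", train=False, download=True, transform=eval_transform
)

# The same loader is used both to pick the best checkpoint and to report the
# final (test) accuracy.
val_test_loader = torch.utils.data.DataLoader(
    val_test_set, batch_size=128, shuffle=False, num_workers=4
)
```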

rahulvigneswaran commented 2 years ago

Do you use the model obtained at the end of training, or do you choose the model based on the validation set?

Vanint commented 2 years ago

It depends. In our experiments, on iNat-18 we use the model obtained at the end of training for comparison. On CIFAR-LT, following previous methods, we use the saved checkpoint selected on the validation set. In my experience, the difference between the last model and the best model is small for our method.
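
To make the two selection strategies explicit, here is a minimal sketch; it is an illustration rather than the repository's actual training loop, and `train_one_epoch` and `evaluate` are assumed helper functions (with `evaluate(model, loader)` assumed to return top-1 accuracy).

```python
# Minimal sketch of the two selection strategies described above:
# keeping the last checkpoint (as done for iNat-18) vs. keeping the checkpoint
# with the best validation accuracy (as done for CIFAR-LT).
import copy

def train_and_select(model, train_one_epoch, evaluate, val_loader, num_epochs):
    best_acc, best_state = 0.0, None
    for epoch in range(num_epochs):
        train_one_epoch(model, epoch)
        acc = evaluate(model, val_loader)       # val set == test set here
        if acc > best_acc:
            best_acc = acc
            best_state = copy.deepcopy(model.state_dict())  # "best" model
    last_state = copy.deepcopy(model.state_dict())          # "last" model
    return last_state, best_state, best_acc
```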

Vanint commented 2 years ago

Thanks again for your questions. If you have any further questions, please let me know.