Vanint / SADE-AgnosticLT

This repository is the official Pytorch implementation of Self-Supervised Aggregation of Diverse Experts for Test-Agnostic Long-Tailed Recognition (NeurIPS 2022).
MIT License

About the setting of shared backbone and separate expert #12

Closed zhiyuanyou closed 2 years ago

zhiyuanyou commented 2 years ago

Hi~ Thanks for your excellent work. I have two questions about the paper and the code. (1) I notice that, by default, the shared backbone contains only layer_1 & layer_2 of the ResNet, while the remaining layers (layer_3 & layer_4) all belong to the "expert". Can this setting really be described as a "shared backbone"? By "shared backbone", readers may assume that only the classifier layer is treated as the "expert". (2) Have you tried the setting where only the classifier layer is treated as the "expert"? How large is the performance decrease?

Vanint commented 2 years ago

Hi. Yes, this can be regarded as a "shared backbone", but we only share the lower layers on ImageNet-LT. When the higher layers are placed inside the experts, the experts learn different feature spaces, leading to more diverse experts.
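To make the design concrete, here is a minimal PyTorch sketch of the idea described above: lower layers are shared across experts, while each expert owns its own higher layers and classifier. This is an illustrative stand-in, not the repository's actual code; the module names and channel sizes are made up, and the shared/expert split mirrors the layer_1–2 vs. layer_3–4 division mentioned in the question.

```python
import torch
import torch.nn as nn

class MultiExpertNet(nn.Module):
    """Sketch: shared lower layers + per-expert higher layers and classifiers."""

    def __init__(self, num_classes=10, num_experts=3):
        super().__init__()
        # Shared lower layers (stand-in for ResNet layer_1 & layer_2).
        self.shared = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        # Each expert owns its higher layers (stand-in for layer_3 & layer_4)
        # plus its own classifier, so experts develop distinct feature spaces.
        self.experts = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, num_classes),
            )
            for _ in range(num_experts)
        ])

    def forward(self, x):
        feat = self.shared(x)  # one shared low-level representation
        # One logit tensor per expert.
        return [expert(feat) for expert in self.experts]

model = MultiExpertNet(num_classes=10, num_experts=3)
logits = model(torch.randn(2, 3, 32, 32))
```

Because each expert branches off after the shared block, gradients from the different expert losses only interact in the lower layers, which is what allows the higher-layer feature spaces to diverge.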

Keeping only the classifiers in the experts would sacrifice performance. However, I forget how much the decrease is, maybe 1 or 2 points of accuracy on ImageNet-LT (I am not very sure, maybe you can try).
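For contrast, the "classifier-only experts" variant asked about in the question would look roughly like the sketch below: the entire feature extractor is shared and the experts differ only in their linear heads. Again, this is a hypothetical illustration, not code from the repository.

```python
import torch
import torch.nn as nn

class ClassifierOnlyExperts(nn.Module):
    """Sketch: one fully shared backbone; experts are just Linear heads."""

    def __init__(self, feat_dim=64, num_classes=10, num_experts=3):
        super().__init__()
        # The whole feature extractor is shared (stand-in for a full ResNet).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Experts differ only in their classifier heads, so they all operate
        # on the same feature space, which limits expert diversity.
        self.heads = nn.ModuleList(
            nn.Linear(feat_dim, num_classes) for _ in range(num_experts)
        )

    def forward(self, x):
        feat = self.backbone(x)
        return [head(feat) for head in self.heads]

model = ClassifierOnlyExperts()
outs = model(torch.randn(2, 3, 32, 32))
```

Since all heads consume identical features, the only diversity comes from the head weights themselves, which is consistent with the answer above that this variant loses some accuracy.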

zhiyuanyou commented 2 years ago

Thanks for your response. I will have a try.