Closed: Rayzlx closed this issue 3 years ago
Hi, thanks for your interest. The techniques are indeed designed in the context of supervised learning — we have labels, but they are imbalanced. To apply:
For more details, please refer to our paper.
Hi,
Thanks for your great work! I am wondering, in the self-supervised setting, during step (2) (supervised training), do you freeze the backbone parameters learned in step (1) (representation learning), or do you fine-tune all the parameters of the whole network? Thanks!
Hi @YuemingJin — thanks for your interest. We did not freeze the backbone parameters during the supervised training stage. Unlike the linear evaluation protocol in self-supervised learning, here we want to maximize performance on the imbalanced learning task, so we do not freeze any parameters.
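To make the distinction concrete, here is a minimal PyTorch sketch (not the authors' actual code; the model sizes are illustrative assumptions) contrasting full fine-tuning, as described in the reply above, with the linear evaluation protocol, which freezes the backbone and trains only the classifier head:

```python
import torch.nn as nn

# Illustrative stand-ins for a pre-trained backbone and a classifier head.
backbone = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64))
classifier = nn.Linear(64, 10)
model = nn.Sequential(backbone, classifier)

# Option A: full fine-tuning (what the reply above describes) --
# every parameter, including the pre-trained backbone, stays trainable.
trainable_full = [p for p in model.parameters() if p.requires_grad]

# Option B: linear evaluation protocol -- freeze the backbone so the
# optimizer only updates the classifier head.
for p in backbone.parameters():
    p.requires_grad = False
trainable_linear = [p for p in model.parameters() if p.requires_grad]

print(len(trainable_full), len(trainable_linear))  # 6 tensors vs. 2 tensors
```

In practice the optimizer would then be built from only the trainable parameters, e.g. `torch.optim.SGD(filter(lambda p: p.requires_grad, model.parameters()), lr=...)`.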
Thanks for sharing the code! Hope you can answer my question!