YyzHarry / imbalanced-semi-self

[NeurIPS 2020] Semi-Supervision (Unlabeled Data) & Self-Supervision Improve Class-Imbalanced / Long-Tailed Learning
https://arxiv.org/abs/2006.07529
MIT License

Some questions about the assumption in the paper #14

Closed: jiequancui closed this issue 3 years ago

jiequancui commented 3 years ago

Hi, I'm very interested in your paper; the proofs in particular attracted me. However, I have some questions about understanding the proof.

"We assume a properly designed black-box self-supervised task so that the learned representation is Z = k1 ||X||^{2} + k2, where k1, k2 > 0. Precisely, this means that we have access to the new features Zi for the i-th data after the black-box self-supervised step, without knowing explicitly what the transformation ψ is. "

I'm confused by the following questions: (1) Why can a properly designed black-box self-supervised task obtain the learned representation Z = k1 ||X||^{2} + k2? Do MoCo or rotation-based self-supervised methods respect this assumption?

(2) Why can the supervised classification task not obtain a similar representation, Z = k1 ||X||^{2} + k2?

YyzHarry commented 3 years ago

Thank you for your interest in our work.

For your question: we provide some simple theoretical models to motivate the development of our techniques. Note that we are not assuming this holds for all self-supervised learning methods; rather, we consider a setting where such a representation (e.g., an affine transformation of the l_2 norm) is learned via a well-designed self-supervised method. In other words, we consider a specific and simple setting to provide some insights. Since this representation is not complicated, we believe this is not a strong assumption for a motivating example. Of course, a full account that takes existing self-supervised learning techniques into consideration would be interesting, but it is very complicated. It is certainly beyond the scope of the current paper, though it could be interesting in its own right.
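To make the motivating setting concrete, here is a toy numerical sketch (not code from the paper; the two-Gaussian setup and all constants below are made up for illustration). When the squared feature norms of two classes concentrate at different values, the assumed representation Z = k1 ||X||^{2} + k2 becomes separable by a simple threshold, regardless of how imbalanced the labels are:

```python
import numpy as np

# Toy illustration: two isotropic Gaussian classes whose squared
# feature norms concentrate at different values. All parameters
# here are hypothetical, chosen only to make the point visible.
rng = np.random.default_rng(0)
d = 100                        # feature dimension
n = 1000                       # samples per class
sigma_pos, sigma_neg = 1.0, 2.0
k1, k2 = 0.5, 1.0              # arbitrary positive constants

X_pos = rng.normal(0.0, sigma_pos, size=(n, d))
X_neg = rng.normal(0.0, sigma_neg, size=(n, d))

# The assumed "black-box" representation: an affine map of ||X||^2.
z_pos = k1 * (X_pos ** 2).sum(axis=1) + k2
z_neg = k1 * (X_neg ** 2).sum(axis=1) + k2

# ||X||^2 concentrates around d * sigma^2, so the midpoint between
# the two class means of Z works as a label-free threshold.
threshold = k1 * d * (sigma_pos ** 2 + sigma_neg ** 2) / 2 + k2
acc = ((z_pos < threshold).mean() + (z_neg > threshold).mean()) / 2
print(f"separation accuracy: {acc:.3f}")   # close to 1.0 for large d
```

In high dimensions ||X||^2 concentrates sharply around d * sigma^2, which is why the printed accuracy is essentially 1.0; that is the sense in which such a representation makes the downstream classification easy.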

Hope this clarifies your understanding!

jiequancui commented 3 years ago

Thank you for your kind reply. I got it. Very interesting paper!