[Closed] threeStoneLei closed this issue 2 months ago
I have a question for the authors about the sign of the second term in the RCL loss. Since intra-class samples should not have feature divergence imposed on them, shouldn't the sign be positive? With the negative sign used in the paper, the term instead rewards intra-class sample pairs whose similarity exceeds the threshold.
Thanks for your interest. We are currently preparing the code and will release it by the end of this month (at the latest, before the conference). Sorry for the delay, which is due to code cleanup and authorization.
$$\log \Big( \sum_{\mathbf{x}_k \in \mathcal{P}(\mathbf{x}_i)} \exp\big( \langle \phi(\mathbf{x}_i), \phi(\mathbf{x}_k) \rangle / \tau \big) + \exp(1/\tau) \Big)$$
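As a sanity check, the term above can be sketched in NumPy. This is an illustrative reimplementation, not the authors' released code; `tau = 0.05` and the embedding shapes are assumptions for the example.

```python
import numpy as np

def intra_class_term(feat_i, feats_pos, tau=0.05):
    """log( sum_{x_k in P(x_i)} exp(<phi(x_i), phi(x_k)> / tau) + exp(1/tau) )

    feat_i:    (d,)   L2-normalised embedding of the anchor x_i
    feats_pos: (n, d) L2-normalised embeddings of its positives P(x_i)
    tau:       assumed temperature (placeholder value)
    """
    sims = feats_pos @ feat_i  # cosine similarities to the anchor
    return float(np.log(np.exp(sims / tau).sum() + np.exp(1.0 / tau)))

# One positive with cosine similarity 0.7 to the anchor:
anchor = np.array([1.0, 0.0])
positive = np.array([[0.7, np.sqrt(1.0 - 0.49)]])
value = intra_class_term(anchor, positive)  # dominated by the exp(1/tau) offset
```

Because `exp(1/tau)` is always present inside the log, the term is bounded below by `1/tau` regardless of the similarities.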
Could the authors open-source the code? Thank you. I also found that in the RCL loss in the paper, when β is greater than 1, the term that limits the intra-class distance becomes too large. For sample pairs whose cosine similarity exceeds 0.7, dividing by τ scales the similarity to 14; after taking the exponential and then the logarithm, this term's value is at least 14, which is far larger than the other terms in the loss.