skynbe / FedRCL

Official Implementation of FedRCL (CVPR 2024)

Can the author open source the code? Thank you. #1

Closed threeStoneLei closed 2 months ago

threeStoneLei commented 4 months ago

Can the author open source the code? Thank you. I also found that in the RCL loss in the paper, when β is greater than 1, the loss term that limits the intra-class distance becomes too large. For sample pairs whose cosine similarity exceeds 0.7, scaling by 1/τ maps the similarity to 14; after the exponential and then the logarithm, the resulting loss value is at least 14, which is disproportionately large compared with the other loss terms.
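To spell out the arithmetic (assuming τ = 0.05, which the scaling of 0.7 to 14 implies), a single such pair already contributes

$$\log\big(\exp(0.7/\tau)\big) = 0.7/0.05 = 14,$$

and additional terms inside the sum only increase the value.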

threeStoneLei commented 4 months ago

I have a question about the sign of the second term in the RCL loss. Since the term is meant to impose feature divergence among intra-class samples, shouldn't the sign be positive? With the negative sign as written in the paper, the term rewards intra-class sample pairs whose similarity exceeds the threshold instead of penalizing them.

skynbe commented 4 months ago

Thanks for your interest. We are currently preparing the release and will publish the code by the end of this month (at the latest before the conference). Sorry for the delay, which is due to some code cleanup and authorization.

  1. We have presented the RCL loss in its current form for the sake of simplicity and clarity. In the implementation, we construct the penalty loss term in the form of contrastive learning, identical to the unsupervised CL loss except that the similarity for the positive pair is set to the constant 1. This helps align the scale of the penalty loss with that of the SCL loss:

$$\log \Big( \sum_{\mathbf{x}_k \in \mathcal{P}(\mathbf{x}_i)} \exp\big( \langle \phi(\mathbf{x}_i), \phi(\mathbf{x}_k)\rangle / \tau \big) + \exp(1/\tau) \Big)$$
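A minimal PyTorch sketch of this penalty term (the function name `rcl_penalty`, the batch-level masking, and the default τ = 0.05 are illustrative assumptions, not the released implementation):

```python
import math

import torch
import torch.nn.functional as F

def rcl_penalty(features: torch.Tensor, labels: torch.Tensor, tau: float = 0.05) -> torch.Tensor:
    """Divergence penalty in the form above, averaged over anchors x_i.

    features: (N, D) embeddings phi(x_i); labels: (N,) class labels.
    Same-class pairs (excluding self-pairs) play the role of P(x_i).
    """
    z = F.normalize(features, dim=1)          # unit-norm so the inner product is cosine similarity
    sim = z @ z.t() / tau                     # pairwise <phi(x_i), phi(x_k)> / tau
    eye = torch.eye(len(labels), dtype=torch.bool, device=z.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
    # sum over x_k in P(x_i), plus exp(1/tau) for the "positive pair"
    # whose similarity is fixed to the constant 1
    denom = (sim.exp() * pos_mask).sum(dim=1) + math.exp(1.0 / tau)
    return denom.log().mean()
```

Since the positive-pair similarity is a constant, this differs from the full −log(numerator/denominator) contrastive form only by the additive constant 1/τ, so the gradients are identical.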

  2. The sign of the RCL loss is a typo. (We have fixed it in the camera-ready version but have not yet updated the arXiv version.) Thanks for the correction; we will update the paper as soon as possible.