Wu0409 / DuPL

[CVPR'24] DuPL: Dual Student with Trustworthy Progressive Learning for Robust Weakly Supervised Semantic Segmentation.

Questions about discrepancy loss #3

Closed whb919168848 closed 5 months ago

whb919168848 commented 5 months ago

Thank you very much for your previous answer. I noticed that the discrepancy loss in the code is different from the one in your paper. Which one is correct?

whb919168848 commented 5 months ago

@Wu0409

Wu0409 commented 5 months ago

Thank you for your attention. Both implementations are ok :)

When we released this repository, we tested the direct use of 1 + cos(a, b) and found that it achieved very close performance. For simplicity, we chose 1 + cos(a, b) as the final implementation of sim_loss.

In fact, the goal of Equation (2) in the paper is quite similar to 1 + cos(a, b) (except for the penalty associated with using the log operator).
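For reference, here is a minimal sketch of the `1 + cos(a, b)` form described above, written with NumPy. The function names and vector shapes are illustrative, not the repository's exact code; the repo operates on PyTorch feature tensors.

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two flattened feature vectors.
    a, b = np.ravel(a), np.ravel(b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def sim_loss(a, b):
    # The released implementation: 1 + cos(a, b).
    # Minimizing it pushes cos(a, b) toward -1, i.e. it drives the two
    # students' features apart, which is the goal of the discrepancy loss.
    return 1.0 + cosine_sim(a, b)
```

Because cos(a, b) lies in [-1, 1], the loss is bounded in [0, 2]: it is 2 for identical directions and 0 for opposite ones, which matches the intent of Equation (2) without the extra penalty introduced by the log operator.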

Wu0409 commented 5 months ago

No further updates.