Vanint / SADE-AgnosticLT

This repository is the official Pytorch implementation of Self-Supervised Aggregation of Diverse Experts for Test-Agnostic Long-Tailed Recognition (NeurIPS 2022).
MIT License

Implementation detail about LDAM loss #5

Closed fliman closed 2 years ago

fliman commented 2 years ago

Hi Vanint, I noticed that in your LDAM loss implementation the scale is applied to the adjustment only:

x_m = x - batch_m * self.s 

which is different from the original LDAM loss:

 return F.cross_entropy(self.s*output, target, weight=self.weight)

which is basically equivalent to

x_m = (x - batch_m) * self.s 

Could you explain this detail further? Is the coefficient absorbed somewhere in the logit output?
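For context, a minimal numeric sketch (with made-up values, not taken from the repository) showing that the two formulations generally produce different logits unless the scale is absorbed elsewhere:

```python
# Hypothetical toy values to compare the two LDAM scaling variants.
s = 30.0        # scale factor (LDAM's common default)
x = 2.0         # logit of the target class
batch_m = 0.5   # class-dependent margin for that class

# Variant in this repository: scale applied to the margin only
variant_a = x - batch_m * s

# Original LDAM: margin subtracted first, then the whole logit scaled
# (i.e. F.cross_entropy(s * output, ...) with output = x - batch_m)
variant_b = (x - batch_m) * s

print(variant_a)  # 2.0 - 0.5 * 30.0 = -13.0
print(variant_b)  # (2.0 - 0.5) * 30.0 = 45.0
```

The two agree only if the `s` factor on `x` is applied somewhere else in the pipeline, which is exactly what the question asks about.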

Vanint commented 2 years ago

Hi, thanks for your attention. In fact, we did not use the LDAM loss in our work. The LDAM loss here comes from RIDE (https://github.com/frank-xwang/RIDE-LongTailRecognition/blob/main/model/loss.py), so you may want to raise the question in the RIDE repository. Thanks.

Vanint commented 2 years ago

If you have any other questions, I'm happy to discuss them further.