CoinCheung / pytorch-loss

label-smooth, amsoftmax, partial-fc, focal-loss, triplet-loss, lovasz-softmax. Maybe useful
MIT License

Regularizing Neural Networks by Penalizing Confident Output Distribution #32

Closed eakirtas closed 2 years ago

eakirtas commented 2 years ago

Hello,

First of all, thank you for contributing such a nice repo, integrating so many useful loss functions in PyTorch.

According to this repo, there is an implementation in your repo of Regularizing Neural Networks by Penalizing Confident Output Distributions. However, I can't find a citation of this paper anywhere in the repo. Is this loss function implemented here?

Thank you in advance!

CoinCheung commented 2 years ago

Hi,

Would you please tell me which loss you are referring to? I don't remember reading this paper or implementing the loss proposed in it.

eakirtas commented 2 years ago

I was just wondering whether you had implemented the proposed loss. I've scanned the repo and didn't find anything similar to the proposed method, so I'm wondering whether you are simply using a different name and I'm missing it, or whether it indeed isn't implemented.

CoinCheung commented 2 years ago

Would I run any risk regarding permissions if I were to implement this (just an assumption)? I have never considered these things before.

eakirtas commented 2 years ago

I'm pretty sure there is no permission issue. Not only is there no permission issue, but I feel it actually benefits the authors when their work gets implemented, since they gain more visibility and potentially more citations. For ethical reasons, we should cite their paper to give them credit (as you already do for your other implementations).

Anyway, maybe there is a misunderstanding: I was just searching for an implementation to use in my own work. I am not asking you to add a reference for something you didn't implement (as I said, you already cite the papers you do implement).

So if you are interested in implementing it, that would be very helpful for me, for a lot of other people, and for the authors as well! If I implement it myself, I will open a PR to include it in your repo (if you are interested, of course).
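
For reference, here is a minimal sketch of what I have in mind: cross-entropy plus a negative-entropy penalty on the predicted distribution, weighted by a coefficient beta. The class name and the default beta below are just my own placeholders, not anything that exists in this repo.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConfidencePenaltyLoss(nn.Module):
    """Sketch of the confidence penalty: L = CE(logits, labels) - beta * H(p),
    where H(p) is the entropy of the softmax output. Names/defaults are placeholders."""

    def __init__(self, beta=0.1):
        super().__init__()
        self.beta = beta

    def forward(self, logits, labels):
        # standard cross-entropy term
        ce = F.cross_entropy(logits, labels)
        # entropy of the predicted distribution: H(p) = -sum_k p_k * log p_k
        log_probs = F.log_softmax(logits, dim=1)
        probs = log_probs.exp()
        entropy = -(probs * log_probs).sum(dim=1).mean()
        # subtracting the entropy penalizes confident (low-entropy) outputs
        return ce - self.beta * entropy


# quick smoke test
logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
print(ConfidencePenaltyLoss(beta=0.1)(logits, labels))
```

This is just how I read the paper; happy to adapt it to the conventions of the other losses in this repo if you'd like a PR.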

Thank you again for your awesome work