jonbarron / robust_loss_pytorch

A pytorch port of google-research/google-research/robust_loss/
Apache License 2.0

Is weight_decay needed? #24

Open wzn0828 opened 3 years ago

wzn0828 commented 3 years ago

Hi, thanks for your wonderful work.

While using your AdaptiveLossFunction, I found that alpha did not decrease; it stayed at its highest value throughout training.

So I applied weight_decay to alpha and scale (roughly as in the sketch below). However, I suspect weight_decay should not be applied to these two parameters.
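To be concrete, this is roughly the setup I mean (a minimal sketch; the AdaptiveLossFunction arguments follow the repo README, and the linear model is just a placeholder). Note that any decay applied to the loss's parameter group acts on the latent parameters from which alpha and scale are derived, not on alpha directly:

```python
import numpy as np
import torch
from robust_loss_pytorch import adaptive

# Placeholder model; only the two optimizer parameter groups matter here.
model = torch.nn.Linear(10, 1)
adaptive_loss = adaptive.AdaptiveLossFunction(
    num_dims=1, float_dtype=np.float32, device='cpu:0')

# Separate parameter groups: weight decay on the model weights,
# and a separately controlled decay on the loss's latent alpha/scale.
# Setting the second group's weight_decay to 0.0 leaves them unregularized.
optimizer = torch.optim.Adam([
    {'params': model.parameters(), 'weight_decay': 1e-4},
    {'params': adaptive_loss.parameters(), 'weight_decay': 0.0},
], lr=1e-3)
```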

What's your opinion?

jonbarron commented 3 years ago

If the alpha value stays large throughout optimization, it sounds like your data doesn't have very many outliers, in which case you'll probably get optimal performance by just allowing alpha to be large. Regularizing alpha to be small does not make much sense to me unless you have a prior belief on the outlier distribution of your data. If you want to control the shape of the loss function, I'd just use the general formulation in general.py, and set alpha to whatever value you want.
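For example, fixing the shape by hand might look like this (a minimal sketch, assuming the general.lossfun signature shown in the repo README; the alpha and scale values here are only illustrative):

```python
import torch
from robust_loss_pytorch import general

# x holds the residuals, e.g. prediction - target.
x = torch.randn(128, 1)

# Fix alpha and scale instead of learning them: alpha = 2 gives an L2-like
# loss, alpha = 1 a smoothed L1 (Charbonnier), alpha = 0 a Cauchy/Lorentzian.
loss = general.lossfun(
    x, alpha=torch.tensor([1.0]), scale=torch.tensor([0.5]))
loss = loss.mean()
```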

wzn0828 commented 3 years ago

OK, your answer clears up my confusion. Thank you very much.