jonbarron / robust_loss_pytorch

A pytorch port of google-research/google-research/robust_loss/
Apache License 2.0

Nan occurs in backward loss_otherwise #21

Open ChristophReich1996 opened 3 years ago

ChristophReich1996 commented 3 years ago

Hi, I'm encountering a weird NaN error in general.py during training after multiple epochs. Any idea why this error occurs or how to fix it?

[Screenshot: error message from torch.autograd.detect_anomaly()]

Cheers and many thanks in advance, Christoph

jonbarron commented 3 years ago

Hard to say without more info, but my guess at the most likely causes: 1) the input residual to the loss is extremely large (in which case clipping it should help) or is itself NaN, or 2) alpha or scale is becoming extremely large or small, in which case you probably want to manually constrain the range of values they can take using the module interface.
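
For reference, here is a minimal sketch (not from the thread) of both suggestions: clamping the residual before it reaches the loss, and bounding alpha/scale through the adaptive module. The keyword names (`alpha_lo`, `alpha_hi`, `scale_lo`, `scale_init`) and the `lossfun` method are assumed to match `robust_loss_pytorch.adaptive.AdaptiveLossFunction`; check them against your installed version.

```python
import torch
from robust_loss_pytorch.adaptive import AdaptiveLossFunction

# Assumed kwargs: keep alpha within (alpha_lo, alpha_hi) and scale away from zero
# so neither can drift to an extreme value during training.
adaptive_loss = AdaptiveLossFunction(
    num_dims=1,
    float_dtype=torch.float32,
    device='cpu',
    alpha_lo=0.001,
    alpha_hi=1.999,
    scale_lo=1e-3,
    scale_init=1.0)

def robust_loss(pred, target, clip=1e3):
    # Clip extreme residuals so a single outlier cannot produce a huge or NaN gradient.
    residual = torch.clamp(pred - target, min=-clip, max=clip)
    return adaptive_loss.lossfun(residual.reshape(-1, 1)).mean()
```

If the residual is already NaN before the loss is applied, clipping will not help; in that case the NaN originates upstream in the model and should be traced there (e.g. with torch.autograd.detect_anomaly()).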