Closed ghost closed 4 years ago
@Holmeyoung thank you so much
Before PyTorch v1.2.0 there was a bug in CTCLoss: the gradients would become NaN after several epochs. To work around it, I had to "replace all nan/inf in gradients to zero". They fixed it in v1.2.0 and later, and I just kept this setting around.
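For context, here is a minimal sketch of that kind of workaround in a plain PyTorch training step. The function name `zero_nan_inf_grads` and the model/optimizer names are illustrative only, not necessarily the repo's actual code:

```python
import torch

def zero_nan_inf_grads(model):
    # After loss.backward(), replace any NaN/Inf entries in the
    # gradients with zero before the optimizer step.
    for p in model.parameters():
        if p.grad is not None:
            bad = torch.isnan(p.grad) | torch.isinf(p.grad)
            p.grad[bad] = 0.0

# Usage inside a training step (names are hypothetical):
# loss = criterion(log_probs, targets, input_lengths, target_lengths)
# loss.backward()
# zero_nan_inf_grads(crnn)
# optimizer.step()
```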
got it, thanks.
@Holmeyoung In params.py there is `dealwith_lossnan = False  # whether to replace all nan/inf in gradients to zero`. Why did you set dealwith_lossnan = False? To handle the problem "Just don't know why, but when i train the net, the loss always become nan after several epoch.", should dealwith_lossnan be set to True?