Open janzd opened 4 years ago
You can try increasing the batch size.
I increased the batch size from 4 to 10, but the loss still becomes NaN.
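In case it helps others hitting this: besides increasing the batch size, lowering the learning rate or clipping gradients often stops a loss from diverging to NaN. Here is a minimal NumPy sketch (illustrative only, not this repo's code) of clipping a gradient's L2 norm before the update step:

```python
import numpy as np

def clip_grad_norm(grad, max_norm):
    # Scale the gradient down if its L2 norm exceeds max_norm
    norm = np.linalg.norm(grad)
    if norm > max_norm:
        grad = grad * (max_norm / norm)
    return grad

# Simulate one SGD step with a pathologically large gradient
w = np.array([1.0, -2.0])
grad = np.array([1e6, -5e5])  # exploding gradient
lr = 0.01

clipped = clip_grad_norm(grad, max_norm=5.0)
w_new = w - lr * clipped  # update stays bounded and finite
```

Most frameworks have this built in (e.g. a clip-norm option on the optimizer), so you usually only need to turn it on rather than write it yourself.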
Are the default parameter settings, such as the learning rate, the ones you used for training? I cannot train the network for more than about 10 epochs before the loss becomes NaN.
Hey, did you sort out your issue?