markus-hinsche opened this issue 3 years ago
This may be because of https://github.com/aamini/evidential-deep-learning/blob/7a22a2c8f35f5a2ec18fd37068b747935ff85376/evidential_deep_learning/losses/continuous.py#L35 , where the log is not safe.
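To illustrate the concern: if the argument of that `log` underflows to zero (or goes slightly negative numerically), the loss becomes `-inf`/NaN and poisons the gradients. A minimal NumPy sketch of the usual fix, clamping the argument with a small epsilon (the function names and the epsilon value here are my own, not from the repo):

```python
import numpy as np

def unsafe_log(x):
    # Direct log, as in the linked loss code: log(0) -> -inf, log(<0) -> nan
    return np.log(x)

def safe_log(x, eps=1e-12):
    # Clamp the argument away from zero before taking the log
    return np.log(np.maximum(x, eps))

# An evidence term that has collapsed to zero during training
v = np.array([1e-3, 0.0])
print(unsafe_log(v))  # second entry is -inf, which turns the loss into NaN downstream
print(safe_log(v))    # finite everywhere
```

In TensorFlow the same idea is typically written as `tf.math.log(tf.maximum(x, eps))` or with `tf.clip_by_value`.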
I have run into the same problem; could you tell me how to solve it?
For a regression task, I am using a mid-size CNN with Conv and MaxPool layers up front and Dense layers at the end.
This is how I integrate the evidential loss (before, I used an MSE loss):
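For reference, the loss follows the Normal-Inverse-Gamma negative log-likelihood plus evidence regularizer from the repo. A self-contained NumPy/`math` sketch of those two terms (in the real model these are `tf.math` ops passed to `model.compile`; the helper names and the `coeff=1e-2` value are illustrative, not taken from my code):

```python
import math

def nig_nll(y, gamma, v, alpha, beta):
    # Negative log-likelihood of the Normal-Inverse-Gamma evidential head.
    # Note the three log calls: any of them can blow up if its argument hits zero.
    two_b_lambda = 2.0 * beta * (1.0 + v)
    return (0.5 * math.log(math.pi / v)
            - alpha * math.log(two_b_lambda)
            + (alpha + 0.5) * math.log(v * (y - gamma) ** 2 + two_b_lambda)
            + math.lgamma(alpha) - math.lgamma(alpha + 0.5))

def nig_regularizer(y, gamma, v, alpha):
    # Penalize confident (high-evidence) predictions that are far from the target
    return abs(y - gamma) * (2.0 * v + alpha)

def evidential_loss(y, gamma, v, alpha, beta, coeff=1e-2):
    # Total loss: NLL plus weighted evidence regularizer
    return nig_nll(y, gamma, v, alpha, beta) + coeff * nig_regularizer(y, gamma, v, alpha)

loss = evidential_loss(y=1.0, gamma=0.8, v=1.0, alpha=2.0, beta=1.0)
print(loss)
```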
This is how I integrate the DenseNormalGamma layer:
Here is the issue I am facing:
Previously, 0.0007 = 7e-4 was a learning rate that worked well.

- With a learning rate of 7e-7, I get loss=NaN, mostly already in the very first epoch of training.
- With a learning rate of 7e-9, I don't get NaN, but of course the network is not learning fast enough.

Is there any obvious mistake I'm making? Any thoughts and help are appreciated.