Hello~
Thank you very much for sharing the code!
I tried to use my own dataset (with the same shape as MNIST) with the code. After some iterations, I found that the training loss became NaN. After carefully checking the code, I found that the following line may trigger the NaN loss:
In CNN-RelationExtraction/CNN.py, line 104:
If y_conv (the output of softmax) contains 0, tf.log(y_conv) evaluates to -inf because log(0) is undefined, and this can turn the loss into NaN.
It could be fixed by making the following changes:
or
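To illustrate the failure and the two kinds of fix, here is a small NumPy sketch. The TensorFlow equivalents noted in the comments (tf.clip_by_value(y_conv, 1e-10, 1.0) and tf.log(y_conv + 1e-10)) are my assumptions about the intended changes, not code from the repository:

```python
import numpy as np

# Toy softmax output where one class probability is exactly 0.
y_conv = np.array([[0.0, 1.0]])
y_ = np.array([[0.0, 1.0]])  # one-hot labels

# Naive cross-entropy: log(0) = -inf, and 0.0 * (-inf) = nan,
# so the summed loss becomes NaN.
naive_loss = -np.sum(y_ * np.log(y_conv))

# Fix 1: clip probabilities away from 0 before the log
# (in TensorFlow: tf.clip_by_value(y_conv, 1e-10, 1.0)).
clipped_loss = -np.sum(y_ * np.log(np.clip(y_conv, 1e-10, 1.0)))

# Fix 2: add a small epsilon inside the log
# (in TensorFlow: tf.log(y_conv + 1e-10)).
eps_loss = -np.sum(y_ * np.log(y_conv + 1e-10))

print(np.isnan(naive_loss))    # True
print(np.isnan(clipped_loss))  # False
print(np.isnan(eps_loss))      # False
```

Either variant keeps the loss finite; the epsilon should be small enough not to distort well-behaved probabilities.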
Hope to hear from you ~
Thanks in advance! : )