MarvinTeichmann / tensorflow-fcn

An Implementation of Fully Convolutional Networks in Tensorflow.
MIT License

Error in loss function #18

Closed: ymzhang1919 closed this 7 years ago

ymzhang1919 commented 7 years ago
    epsilon = tf.constant(value=1e-4)
    logits = logits + epsilon
    softmax = tf.nn.softmax(logits)

It should be:

    epsilon = tf.constant(value=1e-4)
    softmax = tf.nn.softmax(logits) + epsilon

MarvinTeichmann commented 7 years ago

Why? The purpose of the epsilon is to avoid numerical instability.

ymzhang1919 commented 7 years ago

I understand the purpose, but I don't understand how it works. Logits can be big negative numbers. How can you improve the numerical stability of the softmax() operation by adding a small positive number to logits?

On the other hand, adding a small positive number to the softmax output makes the subsequent log() operation more robust.

If I am wrong, can you explain it in detail? Thx.
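
A quick way to see this: softmax is invariant under adding the same constant to every logit, so logits + epsilon produces exactly the same softmax output as logits alone. A minimal check, using the TF1 session API of the era (values are illustrative):

    import tensorflow as tf
    import numpy as np

    logits = tf.constant([[-1000.0, -1001.0, -1002.0]])
    epsilon = tf.constant(value=1e-4)

    with tf.Session() as sess:
        a, b = sess.run([tf.nn.softmax(logits),
                         tf.nn.softmax(logits + epsilon)])

    # True: the shared epsilon cancels out inside the softmax,
    # so the "stabilized" version is numerically a no-op.
    print(np.allclose(a, b))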

MarvinTeichmann commented 7 years ago

You are right; I have fixed it.
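
For reference, a minimal sketch of the corrected cross-entropy loss in the spirit of the fix discussed above (the function name and the one-hot label layout are illustrative assumptions, not the repo's exact code):

    import tensorflow as tf

    def loss(logits, labels, num_classes, epsilon=1e-4):
        # Flatten predictions and one-hot labels to [num_pixels, num_classes].
        logits = tf.reshape(logits, (-1, num_classes))
        labels = tf.to_float(tf.reshape(labels, (-1, num_classes)))
        # Softmax first; the epsilon then keeps log() away from zero
        # when a class probability underflows.
        softmax = tf.nn.softmax(logits) + epsilon
        cross_entropy = -tf.reduce_sum(labels * tf.log(softmax), axis=1)
        return tf.reduce_mean(cross_entropy)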