Closed royveshovda closed 7 years ago
The provided loss function returned NaN. I ended up using loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels)) to get the lab to work.
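For anyone hitting the same NaN: this usually happens when cross-entropy is computed by hand as log(softmax(logits)), since the softmax can underflow to 0 and log(0) is -inf. The fused op is numerically stable because it uses the log-sum-exp trick internally. A minimal NumPy sketch (values are illustrative, not from the lab):

```python
import numpy as np

# Extreme but valid logits that break a hand-rolled cross-entropy.
logits = np.array([[1000.0, 0.0]])
labels = np.array([[0.0, 1.0]])  # one-hot target

with np.errstate(over="ignore", invalid="ignore", divide="ignore"):
    # Naive: exp overflows to inf, softmax becomes nan/0, log(0) = -inf.
    exp = np.exp(logits)
    softmax = exp / exp.sum(axis=1, keepdims=True)
    naive_loss = -np.sum(labels * np.log(softmax), axis=1)  # NaN

# Stable: subtract the max logit first (log-sum-exp trick), the same idea
# tf.nn.softmax_cross_entropy_with_logits relies on.
shifted = logits - logits.max(axis=1, keepdims=True)
log_softmax = shifted - np.log(np.sum(np.exp(shifted), axis=1, keepdims=True))
stable_loss = -np.sum(labels * log_softmax, axis=1)

print(naive_loss)   # [nan]
print(stable_loss)  # [1000.]
```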
That shouldn't happen though. But yes, the direct function you mention is better. I have updated the notebook. Thanks!