Closed — meetps closed this issue 7 years ago
Is there any specific reason why you chose to write a custom loss function instead of directly using this?
```python
# mask_.shape   == (batch_size, h, w, n_classes)
# y_mask_.shape == (batch_size*h*w, n_classes)
mask_ = tf.reshape(mask_, (-1, n_classes))
loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(logits=mask_, labels=y_mask_))
```
I am wondering if there is any added advantage by using your loss function.
Yes — I ran into numerical instabilities with the TensorFlow loss function. For details, see also this issue.
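For context on the kind of instability involved: computing `log(softmax(x))` naively overflows `exp()` for large logits and produces `nan`, which is why stable implementations use the log-sum-exp trick. This is a minimal NumPy sketch of that failure mode, not the repository's actual loss code:

```python
import numpy as np

def naive_xent(logits, labels):
    # log(softmax) computed directly -- exp() overflows for large logits
    p = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
    return -(labels * np.log(p)).sum(axis=-1).mean()

def stable_xent(logits, labels):
    # log-softmax via the log-sum-exp trick: subtract the row max first,
    # so exp() never sees a positive argument and cannot overflow
    z = logits - logits.max(axis=-1, keepdims=True)
    log_p = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    return -(labels * log_p).sum(axis=-1).mean()

logits = np.array([[1000.0, 0.0]])  # exp(1000) overflows to inf
labels = np.array([[1.0, 0.0]])

print(naive_xent(logits, labels))   # nan
print(stable_xent(logits, labels))  # 0.0
```

Note that `tf.nn.softmax_cross_entropy_with_logits` already applies this trick internally, so instabilities seen with it in practice usually come from other sources (e.g. masking, empty classes, or mixed-precision issues).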