lsdefine/attention-is-all-you-need-keras

A Keras+TensorFlow Implementation of the Transformer: Attention Is All You Need

K.mean() in computing loss doesn't make any sense. #20

Open mayurnewase opened 5 years ago

mayurnewase commented 5 years ago

In this loss function:

import tensorflow as tf
from keras import backend as K  # imports assumed by this snippet (Keras with the TensorFlow backend)

def get_loss(args):
    y_pred, y_true = args
    y_true = tf.cast(y_true, 'int32')
    # cross-entropy per token
    loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y_true, logits=y_pred)
    # mask out padding positions (label id 0)
    mask = tf.cast(tf.not_equal(y_true, 0), 'float32')
    # masked average over the sequence dimension
    loss = tf.reduce_sum(loss * mask, -1) / tf.reduce_sum(mask, -1)
    loss = K.mean(loss)
    return loss

loss = tf.reduce_sum(loss * mask, -1) / tf.reduce_sum(mask, -1) produces a single element, so taking its mean with K.mean() doesn't make any difference.
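
For reference, a minimal standalone sketch (TF 2.x eager mode for convenience; the batch, seq_len and vocab sizes are made up for illustration) that prints the shape produced by each step, so one can check directly what the masked division yields and what K.mean() then does:

import numpy as np
import tensorflow as tf

# made-up sizes, only for shape checking
batch, seq_len, vocab = 4, 7, 10
y_pred = tf.random.normal([batch, seq_len, vocab])                  # fake logits
y_true = tf.constant(np.random.randint(0, vocab, (batch, seq_len)),
                     dtype=tf.int32)                                # 0 acts as the padding id

loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y_true, logits=y_pred)
print(loss.shape)                                                   # per-token loss

mask = tf.cast(tf.not_equal(y_true, 0), 'float32')
per_seq = tf.reduce_sum(loss * mask, -1) / tf.reduce_sum(mask, -1)
print(per_seq.shape)                                                # result of the masked division

final = tf.reduce_mean(per_seq)                                     # what K.mean(loss) computes
print(final.shape)

This is only a shape check under the assumed tensor shapes; the shapes seen in the model itself depend on how get_loss is wired into the graph.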