I have tested the function 'batch_all_triplet_loss'. It is equivalent to tfa.losses.triplet_hard_loss, but you mentioned:

'There is an existing implementation of triplet loss with semi-hard online mining in TensorFlow: tf.contrib.losses.metric_learning.triplet_semihard_loss. Here we will not follow this implementation and start from scratch.'

What is the difference between tf.contrib.losses.metric_learning.triplet_semihard_loss and batch_all_triplet_loss? If I compute the loss over only the semi-hard triplets, shouldn't I get the same result as tfa.losses.triplet_semihard_loss? Besides, why should I optimize only the semi-hard triplets and not also include the hard triplets in my loss function? The loss function tfa.losses.triplet_semihard_loss seems very strange to me.
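Roughly, this is the kind of comparison I ran (a minimal sketch, not my exact script; the model.triplet_loss import path is an assumption based on the blog's repo layout, and the blog's TF1-style code may need small updates such as tf.to_float -> tf.cast to run next to tensorflow_addons on TF2):

```python
# Evaluate the three losses on one random batch and compare the numbers.
import numpy as np
import tensorflow as tf
import tensorflow_addons as tfa

from model.triplet_loss import batch_all_triplet_loss  # assumed import path

rng = np.random.default_rng(0)
embeddings = tf.constant(rng.normal(size=(32, 64)), dtype=tf.float32)
labels = tf.constant(rng.integers(0, 8, size=32), dtype=tf.int32)
margin = 1.0

# Blog version: averages the margin loss over all valid triplets
# that still have a positive loss.
loss_all, frac_pos = batch_all_triplet_loss(labels, embeddings, margin)

# tensorflow_addons versions: semi-hard negative mining vs.
# hardest positive / hardest negative per anchor.
loss_semihard = tfa.losses.triplet_semihard_loss(labels, embeddings, margin=margin)
loss_hard = tfa.losses.triplet_hard_loss(labels, embeddings, margin=margin)

print("batch_all loss:", float(loss_all), "fraction positive triplets:", float(frac_pos))
print("tfa semi-hard loss:", float(loss_semihard))
print("tfa hard loss:", float(loss_hard))
```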