Closed — ZeroSteven618 closed this issue 3 years ago
Hi, I am not sure what you mean. The loss equation you posted is a triplet loss, no? Hinge loss is also defined per-class, so I am not sure what you mean.
Sorry for replying so late. The equation is from your paper, where you named it the hinge embedding loss.
I used to treat it as a different loss from the triplet loss because, in your code in losses.py, you define loss_desc_pair(d1, d2) and loss_desc_non_pair(d1, d3, margin, d2=None), which match the equation I posted from your paper.
You also define loss_desc_triplet(d1, d2, d3, margin, squared_loss=False, mine_negative=False), which is used by default since config.py sets config.use_triplet_loss = True.
I regarded these as two different kinds of loss for the DESC network. I thought you changed the loss implementation from the paper based on later experimental results, but I am not sure, so I came here for an answer.
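For reference, here is a minimal NumPy sketch of the two formulations being discussed. The function names mirror loss_desc_pair, loss_desc_non_pair, and loss_desc_triplet from losses.py, but the exact forms (e.g. where distances are squared) are my assumptions, not a verbatim copy of the repo:

```python
import numpy as np

def loss_desc_pair(d1, d2):
    # Hinge embedding loss, positive branch: pull matching descriptors together.
    return float(np.sum((d1 - d2) ** 2))

def loss_desc_non_pair(d1, d3, margin):
    # Hinge embedding loss, negative branch: push non-matching descriptors
    # at least `margin` apart; zero loss once they are far enough.
    return float(max(0.0, margin - np.linalg.norm(d1 - d3)) ** 2)

def loss_desc_triplet(d1, d2, d3, margin):
    # Triplet loss: the anchor-positive distance should be smaller than
    # the anchor-negative distance by at least `margin`.
    pos = np.linalg.norm(d1 - d2)
    neg = np.linalg.norm(d1 - d3)
    return float(max(0.0, pos - neg + margin))
```

The key difference is that the pair/non-pair (hinge embedding) version penalizes absolute distances per pair, while the triplet version only penalizes the relative gap between the positive and negative distances.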
Honestly, it has been three-plus years since this code was written, so I am not sure either. What you suggest is very likely true. If my memory serves me correctly, the models we released for tf-lift performed similarly to the original LIFT code base.
Excuse me, I have come back with a question about the LIFT implementation, which seems to differ from what you describe in the paper.
In the DESC part, you designed the loss function as:
In your code, config.use_triplet_loss is set to True by default, and losses.py implements both the hinge loss and the triplet loss, with the triplet loss as the default. Did the triplet loss work better than the hinge loss for DESC in your experiments before the repo was published?
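To make the question concrete, here is a hedged sketch of how config.use_triplet_loss might switch between the two DESC losses. The flag name comes from config.py, but the dispatch and the exact loss forms below are my assumptions:

```python
def desc_loss(d1, d2, d3, margin, use_triplet_loss=True):
    """Hypothetical DESC loss dispatch: d1 anchor, d2 positive, d3 negative."""
    pos = sum((a - b) ** 2 for a, b in zip(d1, d2)) ** 0.5
    neg = sum((a - b) ** 2 for a, b in zip(d1, d3)) ** 0.5
    if use_triplet_loss:
        # Triplet formulation (the default in config.py): relative margin
        # between positive and negative distances.
        return max(0.0, pos - neg + margin)
    # Hinge embedding formulation: squared pair term plus hinged non-pair term.
    return pos ** 2 + max(0.0, margin - neg) ** 2
```

With this framing, my question is simply whether flipping that flag (hinge vs. triplet) made a measurable difference in your experiments.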
I tested both and the results seemed unchanged, but because of my limited amount of training data the trained models performed poorly either way, so I came here to ask.