I'm not sure what you are trying to do or what code you are using, so some context would be helpful.
The batch hard triplet loss in this repo works for batches with multiple images for each class.
For instance, you could have a batch size of 100, with 10 images from each of 10 classes.
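For illustration, here is a minimal sketch of how such a P x K batch could be sampled. The `images_by_class` dict (mapping each label to a list of its images) and the 10/10 split are assumptions for the example, not code from this repo:

```python
import random

def sample_pk_batch(images_by_class, num_classes=10, images_per_class=10):
    """Sample a batch of num_classes * images_per_class examples,
    with images_per_class images drawn from each of num_classes classes."""
    chosen_classes = random.sample(list(images_by_class.keys()), num_classes)
    batch_images, batch_labels = [], []
    for label in chosen_classes:
        batch_images.extend(random.sample(images_by_class[label], images_per_class))
        batch_labels.extend([label] * images_per_class)
    # batch size = num_classes * images_per_class (100 here)
    return batch_images, batch_labels
```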
In your case your triplets are already formed so you need to change the triplet selection process, not the triplet loss.
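If the triplets are already formed, one option (a sketch under that assumption, not this repo's batch hard loss) is to embed the anchor, positive, and negative separately and apply the standard triplet loss directly, with no online mining:

```python
import tensorflow as tf

def triplet_loss_from_formed_triplets(anchor, positive, negative, margin=0.5):
    """Standard triplet loss for pre-formed triplets.
    anchor, positive, negative: embeddings of shape [batch_size, embed_dim]."""
    # Squared Euclidean distances between anchor-positive and anchor-negative pairs
    pos_dist = tf.reduce_sum(tf.square(anchor - positive), axis=1)
    neg_dist = tf.reduce_sum(tf.square(anchor - negative), axis=1)
    # Hinge on the margin, then average over the batch
    loss = tf.maximum(pos_dist - neg_dist + margin, 0.0)
    return tf.reduce_mean(loss)
```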
For the batch hard loss, you need to decrease the learning rate.
With the default learning rate of `1e-3`, the embeddings collapse to a single point, so the loss is equal to the margin (`0.5`).
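For example, assuming an Adam optimizer like the one typically used for this kind of training (a sketch, not this repo's exact training code), the fix is just a smaller learning rate:

```python
import tensorflow as tf

# If training with the default 1e-3 leaves the loss stuck at the margin (0.5),
# the embeddings have collapsed. A smaller learning rate such as 1e-4 is a
# reasonable first thing to try; the exact value is an assumption to tune.
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4)
```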