elias-ramzi closed this issue 1 year ago
Hi, I also have some questions about the implementation details. Since the DSL loss concerns within-class similarity across different scales, what is the batch sampling strategy for this method when training with the DSL loss in a mini-batch gradient descent manner? Apparently, sampling a batch by random shuffling is not suitable here.
Hi, I have worked with the CSL loss and used the same sampling as in standard image retrieval (e.g. an m-per-class sampler). You can find an implementation of the loss here: https://github.com/elias-ramzi/HAPPIER/blob/main/happier/losses/csl_loss.py Hope this helps.
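For readers unfamiliar with m-per-class sampling, here is a minimal, framework-free sketch of the idea: group dataset indices by label, then build each batch from a few classes with exactly `m` samples per class, so every batch contains positive pairs. The function name and parameters below are hypothetical, and this is not the HAPPIER implementation linked above:

```python
import random
from collections import defaultdict

def m_per_class_batches(labels, m, classes_per_batch, seed=0):
    """Sketch of an m-per-class sampler: each yielded batch contains
    `m` indices from each of `classes_per_batch` classes.
    Hypothetical helper, not the sampler used in HAPPIER."""
    rng = random.Random(seed)
    # Group dataset indices by their class label.
    by_class = defaultdict(list)
    for idx, lab in enumerate(labels):
        by_class[lab].append(idx)
    # Keep only classes with at least m samples, in random order.
    classes = [c for c, idxs in by_class.items() if len(idxs) >= m]
    rng.shuffle(classes)
    # Emit batches of classes_per_batch classes, m samples each.
    step = classes_per_batch
    for start in range(0, len(classes) - step + 1, step):
        batch = []
        for c in classes[start:start + step]:
            batch.extend(rng.sample(by_class[c], m))
        yield batch
```

With `labels = [0]*4 + [1]*4 + [2]*4`, `m=2`, and `classes_per_batch=2`, each batch holds 4 indices, two per class, which guarantees within-class pairs for losses that need them.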
Thanks for your reply and for the follow-up work HAPPIER; I will keep learning from and following them.
Hi !
I am working with your datasets.
I have some more questions: do you plan on releasing:
Thanks !