Thank you for your reply. When I use normal global average pooling, the model converges normally. Did you compare the effects of Weighted Regularization Triplet (WRT) loss and Batch Hard triplet loss?
If the model converges normally with global average pooling, it should also converge normally when you replace self.p in GeneralizedMeanPooling with a constant 3, because GeneralizedMeanPooling is then just a fixed tradeoff between global average pooling and global max pooling (see the sketch below). You could also check whether your model converges with plain global max pooling.
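For concreteness, here is a minimal PyTorch sketch of GeM pooling with the exponent fixed to a constant. The module name mirrors the one discussed above, but the exact signature in your codebase may differ:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GeneralizedMeanPooling(nn.Module):
    """GeM pooling: (mean(x^p))^(1/p) over the spatial dimensions.
    p = 1 recovers global average pooling; p -> inf approaches global max pooling."""
    def __init__(self, p=3.0, eps=1e-6, learnable=False):
        super().__init__()
        # For the convergence test above, keep p as the constant 3 (learnable=False).
        self.p = nn.Parameter(torch.ones(1) * p) if learnable else p
        self.eps = eps

    def forward(self, x):
        # Clamping avoids NaN gradients from fractional powers of non-positive inputs.
        x = x.clamp(min=self.eps).pow(self.p)
        return F.adaptive_avg_pool2d(x, 1).pow(1.0 / self.p)
```

With a fixed p = 3 the module has no extra trainable parameters, so if training diverges only when p is learnable, the learnable exponent (or its learning rate) is the likely culprit.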
Weighted Regularization Triplet (WRT) performs a little better than Batch Hard triplet loss in our experiments.
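For reference, here is a minimal sketch of the two losses being compared, assuming PyTorch, a pairwise distance matrix `dist` (e.g. `torch.cdist(feats, feats)`), and integer identity labels. The function names are illustrative, not the repo's actual API:

```python
import torch
import torch.nn.functional as F

def batch_hard_triplet(dist, labels, margin=0.3):
    """Batch-hard mining: hardest positive and hardest negative per anchor."""
    pos = labels.unsqueeze(0) == labels.unsqueeze(1)
    d_ap = (dist * pos).max(dim=1).values        # farthest same-identity sample
    d_an = (dist + pos * 1e9).min(dim=1).values  # closest different-identity sample
    return F.relu(d_ap - d_an + margin).mean()

def weighted_regularized_triplet(dist, labels):
    """WRT: softly weight all positive/negative pairs instead of hard mining,
    with a soft-margin (softplus) objective."""
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()
    neg = 1.0 - pos
    w_p = torch.softmax(dist * pos - 1e9 * neg, dim=1)   # farther positives weigh more
    w_n = torch.softmax(-dist * neg - 1e9 * pos, dim=1)  # closer negatives weigh more
    d_ap = (w_p * dist).sum(dim=1)
    d_an = (w_n * dist).sum(dim=1)
    return F.softplus(d_ap - d_an).mean()
```

The key difference is that WRT keeps gradients flowing through every positive and negative pair, weighted by difficulty, while batch-hard backpropagates through only the single hardest pair per anchor.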
In my experiments, softmax loss + center loss + Batch Hard triplet loss performs better than softmax loss + center loss + Weighted Regularization Triplet (WRT) loss. Maybe my backbone is different from yours.
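For clarity, the combination being discussed is a weighted sum of the three terms. The sketch below assumes PyTorch; the center-loss coefficient is an illustrative value, not one taken from this thread:

```python
import torch
import torch.nn as nn

class CenterLoss(nn.Module):
    """Minimal center loss: pull each feature toward its class center."""
    def __init__(self, num_classes, feat_dim):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, feats, labels):
        return ((feats - self.centers[labels]) ** 2).sum(dim=1).mean()

# Hypothetical combined objective (coefficients are illustrative):
# loss = F.cross_entropy(logits, labels) \
#        + 0.0005 * center_loss(feats, labels) \
#        + batch_hard_triplet(torch.cdist(feats, feats), labels)
```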
That's normal.
Here are some tips for debugging the nonconvergence problem: