ofishhaha opened this issue 6 years ago
I don't really know for sure what is happening in that case, but it sounds a little like the problem is too hard to be learned. In the first case, it seems like the network simply reduces the loss by pushing everything so far apart that the margin no longer matters in the loss; in the second case, where you make that explicitly impossible, the network just gets stuck.
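If it helps with debugging, here is a minimal NumPy sketch (not from this code base; the function name and the choice of statistics are just my suggestion) of the kind of per-batch numbers I would log to confirm that diagnosis:

```python
import numpy as np

def distance_stats(embeddings, labels):
    """Summarize pairwise distances and embedding norms for one batch.

    `embeddings` is an (N, D) array, `labels` an (N,) array of identity IDs.
    If the mean embedding norm and the mean negative distance keep growing
    while the mean positive distance stays large, the network is "solving"
    the loss by inflating the embedding space rather than separating IDs.
    """
    dists = np.linalg.norm(embeddings[:, None] - embeddings[None, :], axis=-1)
    same = labels[:, None] == labels[None, :]
    pos_mask = same & ~np.eye(len(labels), dtype=bool)  # same ID, not self
    neg_mask = ~same                                    # different ID
    return {
        "mean_norm": float(np.linalg.norm(embeddings, axis=1).mean()),
        "mean_pos_dist": float(dists[pos_mask].mean()),
        "mean_neg_dist": float(dists[neg_mask].mean()),
    }
```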
Is this on Market-1501 or another dataset?
Note that we had a critical typo in the equation of the "LG" loss in the first version of our paper! (Our implementation for experiments was correct.) Only the last version (v4, uploaded 21 Nov 2017) has the correct formula. Make sure that you have implemented it that way.
Also, we have implemented neither the lifted loss nor the cosine distance in this code base, so both of these are your own implementations, and without seeing code, it could very well be that there are bugs in your implementation.
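For reference, here is a rough NumPy sketch of the original lifted structured loss as I understand it from Oh Song et al. (CVPR 2016). It is not taken from this repository, and it is not the corrected L_G formula from v4 of our paper, so treat it only as something to diff your own implementation against:

```python
import numpy as np

def lifted_structured_loss(embeddings, labels, margin=1.0):
    """Lifted structured loss (Oh Song et al., CVPR 2016) on plain
    Euclidean distances, as a sanity-check reference implementation."""
    n = len(labels)
    d = np.linalg.norm(embeddings[:, None] - embeddings[None, :], axis=-1)
    same = labels[:, None] == labels[None, :]

    losses = []
    for i in range(n):
        for j in range(i + 1, n):
            if not same[i, j]:
                continue  # the outer sum runs over positive pairs only
            neg_i = d[i][~same[i]]  # distances from i to all negatives
            neg_j = d[j][~same[j]]  # distances from j to all negatives
            # log-sum-exp over both anchors' negatives, plus the positive distance
            j_ij = np.log(np.sum(np.exp(margin - neg_i)) +
                          np.sum(np.exp(margin - neg_j))) + d[i, j]
            losses.append(max(j_ij, 0.0) ** 2)
    # 1 / (2 |P|) * sum of squared hinge terms
    return 0.5 * float(np.mean(losses)) if losses else 0.0
```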
Sorry, I know it's not the key point of your paper. After successfully reproducing the results of your batch hard loss, I tried to check the results of the lifted loss. When I use the Euclidean distance, the distance between negative pairs keeps increasing and ends up much larger than the margin, while the distance between positive pairs never gets small enough. When I use cosine similarity, I constrain the distance to the range 0 to 1; the problem there is that the final loss stays at a high level. I tried changing the learning rate and the momentum, but it didn't help. I didn't use any hard mining.
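For context, this is roughly how I map the cosine similarity to a distance in [0, 1] before plugging it into the lifted loss (simplified sketch, the names are just illustrative, not my actual code):

```python
import numpy as np

def cosine_distance_matrix(embeddings):
    """Pairwise cosine distance scaled into [0, 1]:
    d(a, b) = (1 - cos(a, b)) / 2, computed on L2-normalized embeddings."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    cos_sim = normed @ normed.T          # similarities in [-1, 1]
    return (1.0 - cos_sim) / 2.0         # distances in [0, 1]
```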
Do you have any idea which part of my settings is wrong? Best wishes.