KovenYu / MAR

PyTorch code for our CVPR'19 (oral) work: Unsupervised person re-identification by soft multilabel learning
https://kovenyu.com/publication/2019-cvpr-mar/

Some question about the loss and code #27

Closed kakusikun closed 4 years ago

kakusikun commented 4 years ago

Thanks for your work!

  1. In eq. (3) of the paper, the target-data similarity is computed with cosine similarity, but here it is Euclidean? The default metric of `pdist` is Euclidean.
  2. The cosine similarity computed here is not done in the common way, e.g. via `F.normalize` in PyTorch. Why? What is the concern?
  3. The cosine similarity is finally scaled up by a factor of 30. Why?
  4. In eq. (6) of the paper, the mean and std of the soft multilabels are updated by a moving average with weight 0.5 as described in the supplementary, but the code uses batch size / 10000. Why?
KovenYu commented 4 years ago
  1. When the features have unit norm, cosine similarity is equivalent to L2 distance when used as a distance metric: it is a linear transform of the squared L2 distance, since ||a - b||^2 = 2 - 2 cos(a, b). See the first sketch after this list.
  2. It seems natural to me, though. With unit-norm features, cosine similarity can be implemented as a single matrix multiplication, which is efficient (also shown in the first sketch below).
  3. Basically it is because the softmax loss has a lower bound determined by the feature norm, so without scaling it cannot approach zero. For more details please refer to the NormFace paper in the references (second sketch below).
  4. Yeah, I think this is a version issue. Thanks for letting me know! I shall update the supplementary material. (The third sketch below illustrates the update as it appears in the code.)
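
Regarding points 1 and 2, here is a minimal sketch of both facts (the tensor names are illustrative, not from the MAR codebase): with unit-norm features, the squared Euclidean distance is a linear transform of the cosine similarity, and the cosine similarity itself reduces to a single matrix multiplication.

```python
import torch
import torch.nn.functional as F

feat = F.normalize(torch.randn(8, 2048), dim=1)  # unit-norm features (illustrative)

# Cosine similarity via one matrix multiplication; no further normalization needed.
cos_sim = feat @ feat.t()

# Squared L2 distance: ||a - b||^2 = 2 - 2 * cos(a, b) when ||a|| = ||b|| = 1.
sq_l2 = torch.cdist(feat, feat).pow(2)
assert torch.allclose(sq_l2, 2 - 2 * cos_sim, atol=1e-5)
```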
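For point 3, a small illustration of the bound: cosine scores live in [-1, 1], so without a scale the softmax cross-entropy cannot approach zero even for a confident, correct prediction. The factor 30 is the one from the question; the logits below are made up.

```python
import torch
import torch.nn.functional as F

cos_logits = torch.tensor([[0.9, 0.1, -0.2]])  # cosine scores for 3 classes (made up)
target = torch.tensor([0])

loss_raw = F.cross_entropy(cos_logits, target)         # ~0.58 even though class 0 wins
loss_scaled = F.cross_entropy(30 * cos_logits, target)  # nearly 0 after scaling
print(loss_raw.item(), loss_scaled.item())
```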
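For point 4, a hedged sketch of the moving-average update as the code version does it, with rate batch_size / 10000 rather than the 0.5 stated in the supplementary. The function and variable names (`update_stats`, `running_mean`, `agreements`) are hypothetical, not the repo's.

```python
import torch

def update_stats(running_mean, running_std, agreements, momentum):
    """Exponential moving average of the soft-multilabel agreement statistics."""
    running_mean = (1 - momentum) * running_mean + momentum * agreements.mean()
    running_std = (1 - momentum) * running_std + momentum * agreements.std()
    return running_mean, running_std

batch_size = 368                    # illustrative value
momentum = batch_size / 10000       # code version; the supplementary says 0.5
agreements = torch.rand(batch_size)  # placeholder soft-multilabel agreements
running_mean, running_std = update_stats(torch.tensor(0.5), torch.tensor(0.1),
                                         agreements, momentum)
```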