michuanhaohao / reid-strong-baseline

Bag of Tricks and A Strong Baseline for Deep Person Re-identification
http://openaccess.thecvf.com/content_CVPRW_2019/papers/TRMTMCT/Luo_Bag_of_Tricks_and_a_Strong_Baseline_for_Deep_Person_CVPRW_2019_paper.pdf
MIT License

How about Person ReID using Example Weighting (OSM&CAA, RLL, ICE, IMAE, DM)? #138

Open XinshaoAmosWang opened 4 years ago

XinshaoAmosWang commented 4 years ago

Dear ReID experts:

If possible, I sincerely recommend trying our example weighting methods, for these reasons: (1) ReID datasets may contain noisy observations or labels as well as sample imbalance, and I find example weighting to be a good approach for addressing these challenges. (2) We could discuss and work together to make it happen if there is a chance. (3) I am not an expert in ReID, which makes it harder for me to do this alone.

  1. ReID using RLL: https://github.com/Qidian213/Ranked_Person_ReID

  2. ReID using OSM and CAA: Deep Metric Learning by Online Soft Mining and Class-Aware Attention.

  3. ReID on MARS using IMAE: https://github.com/XinshaoAmosWang/Improving-Mean-Absolute-Error-against-CCE

  4. ReID on MARS using DM: https://github.com/XinshaoAmosWang/DerivativeManipulation

Related Papers:

- OSM and CAA: https://arxiv.org/abs/1811.01459 (robust metric learning & classification)
- RLL: https://arxiv.org/abs/1903.03238 (metric learning)
- IMAE: https://arxiv.org/pdf/1903.12141.pdf (robust classification)
- DM: https://arxiv.org/pdf/1905.11233.pdf (robust classification and general example weighting)

Hi @michuanhaohao, I put this post here for open discussion and collaboration. If this is not okay, please let me know. Otherwise, I appreciate it greatly. Thanks.

michuanhaohao commented 4 years ago

Ok, I have not tried these loss functions. Others can discuss them here.

Xiangyu-CAS commented 4 years ago

@XinshaoAmosWang Hi, did you mean weighted mining methods in metric learning? The default setting in this project is "batch hard triplet loss". I tried a lot of works aimed at finding a proper way to dig out hard examples, such as "batch sample (weighted) triplet loss", "multi-similarity loss", and "Cross-Batch Memory". However, they all turned out worse than "batch hard triplet".
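For reference, the "batch hard triplet" baseline discussed above (from "In Defense of the Triplet Loss for Person Re-Identification") selects, for each anchor, the hardest positive and hardest negative within the batch. A minimal NumPy sketch of that selection rule (the margin value and function name here are illustrative, not this repo's exact implementation):

```python
import numpy as np

def batch_hard_triplet_loss(dist, labels, margin=0.3):
    """Batch-hard triplet loss on a precomputed distance matrix.

    dist: (N, N) pairwise distances within the batch.
    labels: (N,) identity labels.
    For each anchor, pick the hardest (farthest) positive and the
    hardest (closest) negative, then apply a hinge with `margin`.
    """
    pos_mask = labels[:, None] == labels[None, :]
    np.fill_diagonal(pos_mask, False)      # an anchor is not its own positive
    neg_mask = labels[:, None] != labels[None, :]

    hardest_pos = np.where(pos_mask, dist, -np.inf).max(axis=1)
    hardest_neg = np.where(neg_mask, dist, np.inf).min(axis=1)
    return np.maximum(hardest_pos - hardest_neg + margin, 0.0).mean()
```

Note that this is exactly the "binary weight score" regime XinshaoAmosWang contrasts against below: each pair is either selected (the hardest one) or ignored entirely.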

XinshaoAmosWang commented 4 years ago

@Xiangyu-CAS Thanks for your question.

  1. I have two papers on ReID: ID-aware Quality for Set-based Person Re-identification, https://arxiv.org/pdf/1911.09143.pdf (done earlier than OSM and CAA), and Deep Metric Learning by Online Soft Mining and Class-Aware Attention, https://arxiv.org/pdf/1811.01459.pdf.

They only use example weighting (continuous weight scores) without mining (binary weight scores), and were demonstrated to be better than "batch hard triplet", by which I guess you mean "In Defense of the Triplet Loss, 2017".

  2. In our Ranked List Loss, https://arxiv.org/abs/1903.03238, we first remove the examples that already fulfil our optimisation objective (mining stage), then apply example weighting (continuous weight scores) to the remaining data pairs.

  3. In our Instance Cross Entropy, https://arxiv.org/pdf/1911.09976.pdf, we model metric learning from the perspective of the instance matching distribution; we also analyse the importance of example weighting and why similarity scaling is necessary. We do not apply mining in this work.

  4. I guess you have noticed that CCE (softmax + categorical cross entropy) is also popular for learning discriminative representations. Therefore, I recommend trying IMAE: https://arxiv.org/pdf/1903.12141.pdf (robust classification) and DM: https://arxiv.org/pdf/1905.11233.pdf (robust classification and general example weighting).
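The "mining then weighting" two-stage idea in point 2 can be sketched as follows. A negative pair is first selected only if it violates a distance boundary (binary mining), and the violating pairs then receive continuous weights that grow with the degree of violation. This is the spirit of RLL's pair weighting; the exponential form, normalisation, and parameter names below are illustrative rather than the paper's exact formulation:

```python
import numpy as np

def weight_negative_pairs(dist, alpha=1.2, temperature=1.0):
    """Two-stage treatment of negative pairs: mining, then weighting.

    dist: (P,) distances of candidate negative pairs.
    alpha: boundary; a negative pair is "violating" if dist < alpha.
    Stage 1 (mining): binary selection of violating pairs.
    Stage 2 (weighting): a continuous weight that increases with the
    degree of violation, normalised over the selected pairs.
    """
    violating = dist < alpha                       # binary weight scores
    w = np.where(violating, np.exp(temperature * (alpha - dist)), 0.0)
    total = w.sum()
    return w / total if total > 0 else w           # continuous weight scores
```

Compared with batch-hard selection, every violating pair contributes here, just with a smoothly varying influence instead of an all-or-nothing one.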

For IMAE and DM, I have run experiments on the video person re-identification dataset MARS and demonstrated that they work well.
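The core observation behind IMAE (and the starting point for DM's derivative manipulation) can be seen from the per-example gradient magnitude with respect to the target logit: CCE implicitly weights an example by 1 - p_y, so the lowest-confidence (possibly mislabeled) examples dominate training, while MAE weights it by p_y(1 - p_y) (up to a constant factor), which down-weights both easy and very hard examples. A small sketch of these implicit weighting curves (IMAE's actual transformation adds a scaling hyperparameter on top of the MAE curve; see the paper for the exact form):

```python
import numpy as np

def cce_example_weight(p_y):
    """Implicit example weight of CCE: |dL/dz_y| = 1 - p_y.

    Largest for low-confidence examples, so noisy labels dominate.
    """
    return 1.0 - p_y

def mae_example_weight(p_y):
    """Implicit example weight of MAE (up to a constant): p_y * (1 - p_y).

    Peaks at p_y = 0.5 and vanishes at both extremes, so confidently
    wrong (likely noisy) examples barely move the model.
    """
    return p_y * (1.0 - p_y)
```

Both follow from differentiating the loss through the softmax; the contrast is exactly the "example weighting" view of robustness discussed above.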

Thanks again. And please feel free to contact and discuss further.

itsnamgyu commented 3 years ago

@XinshaoAmosWang Hi, I've just read your paper on soft mining, and I think both the OSM and CAA aspects address important points in metric learning. I'd be very interested in trying to re-implement it in this repo and evaluating it on ReID datasets. Has there been any work done on this? (on top of https://github.com/ppriyank/-Online-Soft-Mining-and-Class-Aware-Attention-Pytorch)

XinshaoAmosWang commented 3 years ago

@itsnamgyu, I am unsure about this, but it sounds interesting to do a search and try to re-implement all the related methods together in one repository.

If you have any difficulties in re-implementing those methods listed above, please feel free to contact me using xinshaowang@gmail.com.

BTW, we recently have a new piece of work on label correction (label smoothing, regularisation, and robust deep learning); I hope you will like it: https://arxiv.org/abs/2005.03788; https://xinshaoamoswang.github.io/blogs/2020-06-07-Progressive-self-label-correction/

Thanks for your interest. I look forward to further discussion with you.