Cysu / open-reid

Open source person re-identification library in python
https://cysu.github.io/open-reid/
MIT License

Epoch autocounting, kissme and precision #30

Closed voa18105 closed 6 years ago

voa18105 commented 6 years ago

Hello! I have a few questions; maybe you can suggest a simple solution so I don't have to reinvent the wheel...

  1. Is there a simple way to have the code compute the number of epochs itself, based on the number of iterations passed?
  2. In my results KISSME usually loses to the Euclidean metric. Is that expected? I thought KISSME should clearly outperform Euclidean.
  3. After training with multiple datasets (from your list), the precision never improves; it rather drops to 30-40%. Is that expected? I assumed that more varied data would benefit precision, but it seems to hurt it instead...

If you have any answers, I'd be happy to use them in my research. Also, any suggestions on how to train for maximal precision would be great (I've seen your examples, but I cannot get precision higher than 80%; I only have one GTX 1060 and cannot repeat the experiments with batch size 256).

Cysu commented 6 years ago
  1. Why not use the epoch count directly?
  2. KISSME can be worse than Euclidean sometimes, especially with deep-learning features. IMO the CNN itself learns a linear metric implicitly, so traditional metric learning might not help in such cases (a toy sketch of the two distances follows this list).
  3. How did you use and split the multiple datasets? Is the test subset the same as in single-dataset training? If you simply mix all the datasets together for evaluation, there will be many more gallery images, which makes retrieval much harder.
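
For reference, here is a toy, self-contained sketch (not open-reid's implementation) of the two distances being compared: plain squared Euclidean distance versus a KISSME-style metric `M = inv(Sigma_sim) - inv(Sigma_dis)` fitted on pairwise feature differences. The PSD projection step of full KISSME is omitted, and the features below are random placeholders, not real CNN embeddings.

```python
# Toy comparison of Euclidean distance and a KISSME-style learned metric.
import numpy as np

def euclidean_dist(x, y):
    # Squared Euclidean distance between two feature vectors.
    d = x - y
    return float(d @ d)

def fit_kissme(pos_diffs, neg_diffs, eps=1e-6):
    # pos_diffs / neg_diffs: (n, dim) arrays of feature differences for
    # same-identity and different-identity pairs, respectively.
    dim = pos_diffs.shape[1]
    cov_pos = pos_diffs.T @ pos_diffs / len(pos_diffs)
    cov_neg = neg_diffs.T @ neg_diffs / len(neg_diffs)
    return (np.linalg.inv(cov_pos + eps * np.eye(dim))
            - np.linalg.inv(cov_neg + eps * np.eye(dim)))

def kissme_dist(x, y, M):
    # Mahalanobis-style distance under the learned matrix M.
    d = x - y
    return float(d @ M @ d)

# Usage on random toy features.
rng = np.random.default_rng(0)
feats = rng.normal(size=(40, 8))
M = fit_kissme(feats[:20] - feats[20:], rng.normal(size=(20, 8)))
print(euclidean_dist(feats[0], feats[1]), kissme_dist(feats[0], feats[1], M))
```

The point of the contrast: when the CNN features are already trained with a (roughly) Euclidean objective, the extra matrix `M` learned from a small validation set may add noise rather than discrimination.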
voa18105 commented 6 years ago
  1. Because of the difference in dataset sizes. If I train on VIPeR or DukeMTMC, 100 epochs is not nearly the same number of iterations, so I cannot really compare trained networks under similar conditions. I never know in advance how many epochs I need unless I check the number of identities and images per identity and compute the iteration count myself (see the sketch after this list)... well, whatever, not a serious problem.
  2. I trained with one dataset, then with another, and so on, performing several rounds with a decreasing learning rate. I understand that merging the datasets in advance would bring more benefit, but my idea was to check the fine-tuning ability when starting from a model pre-trained on a different dataset.
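
A small sketch of the arithmetic point 1 describes: converting a fixed iteration budget into a per-dataset epoch count from the training-set size and batch size. The dataset sizes in the usage lines are illustrative placeholders, not numbers from this thread.

```python
# Hypothetical helper: turn a target iteration budget into an epoch count,
# so small and large datasets receive the same number of parameter updates.
import math

def epochs_for_iterations(target_iters, num_train_images, batch_size):
    iters_per_epoch = math.ceil(num_train_images / batch_size)
    return math.ceil(target_iters / iters_per_epoch)

# A small dataset vs. a larger one, same 20k-iteration budget:
print(epochs_for_iterations(20000, num_train_images=1264, batch_size=64))   # 1000 epochs
print(epochs_for_iterations(20000, num_train_images=16522, batch_size=64))  # 78 epochs
```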
Rizhiy commented 6 years ago
  1. One epoch is defined as one pass over the whole dataset, so it doesn't really apply here. You can just keep a global iteration counter and use that (see the sketch after this list).
  2. Your precision really depends on which dataset you use for testing. Unfortunately, none of the current datasets are large enough to transfer well to others.
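
A minimal sketch of the global-counter idea from point 1, assuming a generic `train_step` callable and a re-iterable data loader; both are placeholders for your own training code, not open-reid functions.

```python
# Run training for a fixed number of iterations regardless of dataset size,
# by looping over the data loader under one global counter.
def train_for_iterations(train_step, data_loader, total_iters):
    global_iter = 0
    while global_iter < total_iters:
        for batch in data_loader:          # restarts when the loader is exhausted
            if global_iter >= total_iters:
                return global_iter
            train_step(batch)              # one forward/backward/optimizer update
            global_iter += 1
    return global_iter
```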