nwojke / deep_sort

Simple Online Realtime Tracking with a Deep Association Metric

Difference in feature vector from downloaded mars model and the model trained using Cosine similarity #96

Open aburney123 opened 5 years ago

aburney123 commented 5 years ago

Hi, I used the cosine metric learning code (https://github.com/nwojke/cosine_metric_learning) to train a model from scratch on the MARS dataset as described in the project README. I stopped training at around 90,000 iterations, when the classification accuracy had reached 1.0 and the total loss was no longer decreasing. I used the trained model to compute feature vectors on my test data and then used the cosine distance between a given image and a set of images to find similar ones. The results I get are completely different from those of the model downloaded from https://owncloud.uni-koblenz.de/owncloud/s/f9JB0Jr7f3zzqs8.

As an example, for one image, the cosine distances between its features and those of a set of four images are as follows:
Downloaded model: [0.01074994 0.53431499 0.37582642 0.41685265]
New trained model: [0.00109601 0.00895661 0.00806308 0.03357983]

The actual match is the first image, but as you can see the cosine distances from the new model are all too small, which suggests the features are not discriminative and every image looks the same. Could you please tell me what I did wrong while training and using the new model?
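
For reference, this is roughly how I compute the distances above. It is a minimal NumPy sketch, not code from the repository; the helper name is mine, and the feature vectors are assumed to be the 128-dimensional outputs of the frozen model (e.g. obtained via tools/generate_detections.py):

```python
import numpy as np

def cosine_distances(query, gallery):
    """Return 1 - cosine similarity between one query vector and each row of gallery."""
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    return 1.0 - g.dot(q)

# Stand-in features only; in practice these come from the trained model.
rng = np.random.RandomState(0)
query = rng.randn(128)        # probe image feature
gallery = rng.randn(4, 128)   # features of the four candidate images
print(cosine_distances(query, gallery))  # four distances, analogous to the lists above
```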

ciwei123 commented 5 years ago

@aburney123 Hi, have you found the reason? Can you get the same results as the author's model with your own trained model?

ciwei123 commented 5 years ago

@aburney123 I have a model that gets a better result than the author's model on the MARS dataset, but a worse result than the author's model on the MOT16 dataset. Could you tell me the reason?

nwojke commented 5 years ago

Hi aburney123, you won't be able to reproduce exact numbers for two reasons: (1) due to randomness in initialization and training, you won't end up with the exact same model each time, (2) we have made a few (small) changes to the network architecture between the original Deep SORT implementation and the re-identification experiments.

If you re-train a model, you should check if the cosine similarity threshold needs to be adapted. If for some reason you need to use the exact same architecture as in the original experiments, you will have to copy over network architecture code to the cosine_metric_learning repository. Again, note that even then, you will not end up with the exact same results for the reasons mentioned above.
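
Concretely, the threshold in question is the max_cosine_distance value that deep_sort_app.py passes to the nearest neighbor metric. A minimal sketch of that setup is below; the numeric values are only placeholders to re-tune for your own model, not recommendations:

```python
from deep_sort import nn_matching
from deep_sort.tracker import Tracker

max_cosine_distance = 0.3  # matching threshold; re-tune this for a re-trained model
nn_budget = 100            # per-track feature gallery size (None keeps all samples)

# Associations with cosine distance above the threshold are rejected.
metric = nn_matching.NearestNeighborDistanceMetric(
    "cosine", max_cosine_distance, nn_budget)
tracker = Tracker(metric)
# Per frame: tracker.predict() followed by tracker.update(detections).
```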

aburney123 commented 5 years ago

Thanks for the reply. That clears up the difference in results and also explains the architecture difference.