lsongx / DomainAdaptiveReID

MIT License
188 stars · 50 forks

About the results #5

Closed SWRDFK closed 5 years ago

SWRDFK commented 5 years ago

Hi, author. I'm confused by the results shown in Table 2, Table 4, and Table 5. I draw the following conclusions from them; are they right? (1) For the clustering methods, both DBSCAN and Affinity Propagation perform better when using Euclidean distance rather than ours (Jaccard distance and dW). (2) Given the difference in the results, I think 'Ours w/o dW' in Table 2 is not the same as 'Jaccard distance (with dW)' in Table 4, and 'Jaccard distance (with dW)' should be compared with the self-training baseline. But I don't know what 'Ours w/o dW' means? Thank you for your reply!

lsongx commented 5 years ago

hi @SWRDFK , Jaccard distance is not the same as k-reciprocal encoding (eqn 8). Thus 'ours' is not 'Jaccard distance and dW'. You may refer to [35] for further info. 'Ours w/o dW' means using only k-reciprocal encoding.

Maybe the notation $d_J$ used in Eqn 8 is not appropriate, since $d_J$ is not actually Jaccard distance; really sorry for that. (We just follow the notation in [35].)

[35] Zhun Zhong, Liang Zheng, Donglin Cao, and Shaozi Li. Re-ranking person re-identification with k-reciprocal encoding. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3652–3661. IEEE, 2017.

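For other readers hitting the same confusion: the distinction above can be illustrated with a minimal, hypothetical sketch of a plain Jaccard distance computed over k-reciprocal neighbor sets. This is the simple set-based version only, not the paper's Eqn 8 (which, following [35], additionally expands the neighbor sets and uses soft weighting); all function names here are illustrative.

```python
import numpy as np

def k_reciprocal_neighbors(dist, i, k):
    """Return the k-reciprocal neighbor set of sample i.

    dist: (n, n) pairwise distance matrix.
    A neighbor j is kept only if i is also among j's own top-(k+1)
    nearest neighbors (the +1 accounts for the sample itself).
    """
    nn_i = np.argsort(dist[i])[:k + 1]          # forward k-nearest neighbors
    return {j for j in nn_i
            if i in np.argsort(dist[j])[:k + 1]}  # keep mutual neighbors only

def jaccard_distance(set_a, set_b):
    """Plain Jaccard distance between two neighbor sets: 1 - |A∩B| / |A∪B|."""
    union = len(set_a | set_b)
    return 1.0 - len(set_a & set_b) / union if union else 1.0

if __name__ == "__main__":
    # Toy 1-D features: samples 0/1/2 are close, sample 3 is an outlier.
    feats = np.array([0.0, 1.0, 2.0, 10.0])
    dist = np.abs(feats[:, None] - feats[None, :])

    r0 = k_reciprocal_neighbors(dist, 0, k=1)
    r1 = k_reciprocal_neighbors(dist, 1, k=1)
    r3 = k_reciprocal_neighbors(dist, 3, k=1)
    print(jaccard_distance(r0, r1))  # close samples share neighbors
    print(jaccard_distance(r0, r3))  # the outlier shares none
```

The mutual-neighbor filter is what makes the measure more robust than raw Euclidean distance for clustering: a sample only counts as a neighbor if the relation holds in both directions, which prunes one-sided matches caused by outliers.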
SWRDFK commented 5 years ago

I appreciate your sincere answer! I made a mistake before and understand it now. The k-reciprocal encoding here (Eqn 8) is a variation of Jaccard distance, and the two just look similar. The results show the clear superiority of k-reciprocal encoding over Euclidean distance.