NVlabs / DG-Net

:couple: Joint Discriminative and Generative Learning for Person Re-identification. CVPR'19 (Oral) :couple:
https://www.zdzheng.xyz/publication/Joint-di2019

Interpreting the results of the score in evaluate_gpu.py #48

Closed: ksuhartono97 closed this issue 4 years ago

ksuhartono97 commented 4 years ago

Hi, I am trying to understand the range of the score that comes out of the evaluate function in evaluate_gpu.py.

Putting the relevant part of the function here for reference:

def evaluate(qf,ql,qc,gf,gl,gc):
    # qf: query feature, gf: gallery features (ql, qc, gl, gc are labels and cameras)
    query = qf.view(-1,1)             # reshape the query feature into a column vector
    score = torch.mm(gf,query)        # inner product of every gallery feature with the query
    score = score.squeeze(1).cpu()    # drop the singleton dimension and move to CPU
    score = score.numpy()             # convert to a numpy array

    .....

I expected the scores to be in the range from 0 to 1, but the results show values that go negative and beyond 1. Is this expected behaviour? If yes, what is the score range? I would like to normalize these values.

layumi commented 4 years ago

Hi @ksuhartono97

For the cosine similarity, the score is in [-1,1].

For this repo, I normalize and then concatenate the two features at https://github.com/NVlabs/DG-Net/blob/master/reid_eval/test_2label.py#L137-L138
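
A minimal sketch of that normalize-then-concatenate step (the tensor names and feature dimensions below are illustrative, not the exact variables in test_2label.py):

    import torch
    import torch.nn.functional as F

    # Illustrative stand-ins for the two features extracted per image
    feat_a = torch.randn(16, 1024)  # e.g. appearance feature (shape assumed)
    feat_s = torch.randn(16, 512)   # e.g. structure feature (shape assumed)

    # L2-normalize each feature so its self inner product is 1
    feat_a = F.normalize(feat_a, p=2, dim=1)
    feat_s = F.normalize(feat_s, p=2, dim=1)

    # Concatenate along the feature dimension; each half then contributes
    # a cosine similarity in [-1, 1] to any inner product with another
    # feature built the same way
    feature = torch.cat((feat_a, feat_s), dim=1)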

Since each of the two L2-normalized features contributes a cosine similarity in [-1, 1], the inner product of the concatenated features, i.e. the score, lies in [-2, 2]. You may have a try.
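
If you want to map that onto [0, 1], a simple linear rescale of the [-2, 2] range works (a sketch, not code from this repo; `score` stands for the numpy array computed in evaluate()):

    import numpy as np

    # Illustrative scores in [-2, 2], as produced by the inner product of
    # the concatenated, L2-normalized query and gallery features
    score = np.array([-1.3, 0.2, 1.9])

    # Map [-2, 2] linearly onto [0, 1]
    normalized = (score + 2.0) / 4.0
    print(normalized)  # [0.175 0.55  0.975]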

ksuhartono97 commented 4 years ago

I see, thanks!