liaowang0125 closed this issue 7 years ago
We randomly split the datasets into training and testing sets, so our trained model may not be directly usable for evaluation on these six datasets. But you can finetune it on other datasets, or just train from scratch yourself.
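For context, the random identity split described above can be sketched as follows. This is a hypothetical helper for illustration, not code from the repository; because the split is random, models trained on different splits are not directly comparable.

```python
import random

def random_split(identities, train_ratio=0.5, seed=None):
    """Randomly split a list of person identities into disjoint
    train/test sets (identity-disjoint, as in re-ID protocols)."""
    rng = random.Random(seed)
    ids = list(identities)
    rng.shuffle(ids)
    cut = int(len(ids) * train_ratio)
    return ids[:cut], ids[cut:]

# Example: split 100 identities 50/50 with a fixed seed.
train_ids, test_ids = random_split(range(100), train_ratio=0.5, seed=0)
```

Fixing the seed makes a particular split reproducible, but a different seed yields a different split and hence different evaluation numbers.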
But I find that your paper shows the JSTL+DGD results on the six datasets without finetuning. I think the results I got correspond to that setting, right? And the different results are caused by the random dataset splits. @Cysu
Right. What I mean is that you can safely use our trained model as an initial point for finetuning on other datasets, for example, Market-1501. The higher results are caused by the random splits.
Okay, I understand. Thank you very much.
Have you modified the code? I ran the code as described in the README without any modification, but got terrible results. For example, on CUHK03 I keep getting the following results in the individual experiments (JSTL, JSTL+DGD, and FT-(JSTL+DGD)): top-1 9.0%, top-5 45.4%, top-10 88.7%, top-20 95.8%. The other datasets give similarly bad results.
cuhk03 top-1 93.8%
cuhk01 top-1 77.5%
prid top-1 69.0%
viper top-1 38.3%
3dpes top-1 53.5%
ilids top-1 62.4%
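For reference, the top-k numbers quoted in this thread are CMC scores. A minimal sketch of how single-shot CMC top-k accuracy can be computed from a query-gallery distance matrix (function and parameter names are assumptions, not the repository's actual evaluation code):

```python
import numpy as np

def cmc_topk(distmat, query_ids, gallery_ids, topk=(1, 5, 10, 20)):
    """Compute CMC top-k match rates from a (num_query, num_gallery)
    distance matrix: a query counts as a top-k hit if any of its k
    nearest gallery entries shares its identity."""
    distmat = np.asarray(distmat)
    query_ids = np.asarray(query_ids)
    gallery_ids = np.asarray(gallery_ids)
    ranks = np.argsort(distmat, axis=1)              # nearest gallery first
    matches = gallery_ids[ranks] == query_ids[:, None]
    return {k: float(np.mean(matches[:, :k].any(axis=1))) for k in topk}

# Toy example: 2 queries, 3 gallery entries; each query's nearest
# gallery entry has the matching identity, so top-1 is 1.0.
dist = [[0.1, 0.9, 0.5],
        [0.8, 0.2, 0.6]]
scores = cmc_topk(dist, query_ids=[0, 1], gallery_ids=[0, 1, 2], topk=(1,))
```

Note that single-shot vs. multi-shot protocols and the exact split both affect these numbers, which is why results vary across runs.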