'Occluded Person Re-identification' splits the partial dataset into a training set and a testing set and trains on the partial data, while our transfer setting uses the whole partial dataset as the testing set without any training on it, which also means the gallery set is larger at test time. Moreover, we compared our method with its follow-up algorithm 'teacher-S' [1] under the same setting; experimental results are shown in Table 2.
[1] Jiaxuan Zhuo, Jianhuang Lai, and Peijia Chen. A novel teacher-student learning framework for occluded person re-identification. arXiv preprint arXiv:1907.03253, 2019.
I have a further question about this.
So, how did you split the training data and combine the training and testing data?
For example,
training whole_body_images + testing whole_body_images => gallery
training occluded_body_images + testing occluded_body_images => query
Did you do it this way or another way? I'd appreciate some help.
If you download the occluded dataset, you will find there is no train/test split. We use all occluded images as the query set and the whole-body images as the gallery set.
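For anyone trying to reproduce this, here is a minimal sketch of that protocol. It assumes the folder names mentioned in this thread (`whole_body_images`, `occluded_body_images`) and per-identity subfolders named by person ID; the root path is illustrative, so adjust it to your copy of the dataset:

```python
import glob
import os.path as osp

# Root of the occluded dataset; path is illustrative.
root = 'Occluded-REID'

# All occluded images form the query set, all whole-body images the gallery.
query_paths = sorted(glob.glob(osp.join(root, 'occluded_body_images', '*', '*.jpg')))
gallery_paths = sorted(glob.glob(osp.join(root, 'whole_body_images', '*', '*.jpg')))

def parse(paths):
    # Assumes images sit in per-identity subfolders, so the person ID
    # can be read from the parent directory name.
    return [(p, int(osp.basename(osp.dirname(p)))) for p in paths]

query = parse(query_paths)
gallery = parse(gallery_paths)
```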
I mean when you test on P-DukeMTMC-reID under the transfer setting, did you combine the training and testing sets in the following way?
(train: whole_body_images) + (test: whole_body_images) => gallery
(train: occluded_body_images) + (test: occluded_body_images) => query
Or did you just test on the P-DukeMTMC-reID testing set?
Yes, I just test on the P-Duke testing dataset.
I have one more question. When did you use the combineall option in ImageDataManager? => combineall (bool, optional): combine train, query and gallery in a dataset for training.
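For reference, here is where that flag appears, assuming the Torchreid `ImageDataManager` API; the dataset names and root path below are illustrative:

```python
import torchreid

# combineall merges the train, query and gallery splits of the *source*
# datasets into one larger training set; it does not change how the
# target test set is evaluated.
datamanager = torchreid.data.ImageDataManager(
    root='reid-data',        # illustrative data root
    sources='market1501',    # training dataset(s)
    targets='dukemtmcreid',  # evaluation dataset(s)
    height=256,
    width=128,
    combineall=False         # set True to also train on query+gallery of the sources
)
```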
On the Partial-ReID dataset, the paper "Occluded Person Re-identification" achieved 78.52% rank-1 accuracy, outperforming yours. Can you explain why there is no comparison with all existing methods in the experiments?