Related Project: Strong Identification Loss Baseline
This project provides a strong triplet loss baseline for person re-identification, implemented in PyTorch.
Triplet loss with settings:
- `stride = 2` or `stride = 1` in the last conv block
- `im_w x im_h = 128 x 256`
- `ims_per_id = 4`
- `ids_per_batch = 32`
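Since each batch is built from `ids_per_batch` identities with `ims_per_id` images each (a P x K batch), the loss can mine the hardest positive and hardest negative for every anchor within the batch. Below is a minimal sketch of such batch-hard triplet mining, written against a recent PyTorch (the repo itself targets PyTorch 0.3 and has its own `TripletLoss`); the function name and margin value here are illustrative only.

```python
import torch
import torch.nn.functional as F

def batch_hard_triplet_loss(feats, labels, margin=0.3):
    # feats: (P*K, d) embeddings of one batch; labels: (P*K,) identity labels.
    dist = torch.cdist(feats, feats)                      # pairwise Euclidean distances
    same_id = labels.unsqueeze(0) == labels.unsqueeze(1)  # identity mask
    # Hardest positive: the farthest sample sharing the anchor's identity.
    d_ap = (dist * same_id.float()).max(dim=1).values
    # Hardest negative: the closest sample with a different identity.
    d_an = dist.masked_fill(same_id, float('inf')).min(dim=1).values
    return F.relu(d_ap - d_an + margin).mean()
```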
The results are as follows. S1 and S2 mean `stride = 1` and `stride = 2` respectively; R.R. means using re-ranking.
| | Rank-1 (%) | mAP (%) | R.R. Rank-1 (%) | R.R. mAP (%) |
| --- | --- | --- | --- | --- |
| Market1501-S2 | 86.43 | 71.50 | 89.82 | 85.55 |
| Market1501-S1 | 89.04 | 75.29 | 91.57 | 87.82 |
| Duke-S2 | 78.82 | 61.09 | 83.98 | 79.15 |
| Duke-S1 | 79.76 | 64.27 | 85.32 | 81.48 |
| CUHK03-S2 | 56.36 | 50.82 | 65.21 | 65.76 |
| CUHK03-S1 | 59.14 | 54.43 | 69.86 | 70.03 |
We see that `stride = 1` (higher spatial resolution before global pooling) brings an obvious improvement over `stride = 2` (the original ResNet). I tried this inspired by the paper Beyond Part Models: Person Retrieval with Refined Part Pooling.
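For reference, the stride change amounts to something like the following on a torchvision ResNet-50. This is an equivalent sketch rather than the repo's code, which instead exposes a `last_conv_stride` argument (see the commands below).

```python
import torchvision

def resnet50_last_stride_1(pretrained=True):
    model = torchvision.models.resnet50(pretrained=pretrained)
    # The first bottleneck of layer4 normally down-samples with stride 2.
    # Setting both its 3x3 conv and the 1x1 shortcut conv to stride 1 keeps
    # a 16x8 feature map (instead of 8x4) for a 256x128 input.
    model.layer4[0].conv2.stride = (1, 1)
    model.layer4[0].downsample[0].stride = (1, 1)
    return model
```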
Other details of the settings can be found in the code. To test my trained models or reproduce these results, see the Examples section.
This repository contains the following resources.
It's recommended that you create and enter a Python virtual environment if the versions of the packages required here conflict with yours. I use Python 2.7 and PyTorch 0.3. To install PyTorch, follow the official guide. Other packages are specified in `requirements.txt`:

```bash
pip install -r requirements.txt
```
Then clone the repository:

```bash
git clone https://github.com/huanghoujing/person-reid-triplet-loss-baseline.git
cd person-reid-triplet-loss-baseline
```
Inspired by Tong Xiao's open-reid project, the dataset directories are refactored to support a unified dataset interface.
The transformed dataset has the following features:
- All used images are placed in one directory named `images`.
- A file `ori_to_new_im_name.pkl` records the mapping from original to new image names. The mapping may be needed in some cases.
- The train/val/test partitions are recorded in a file `partitions.pkl`, which is a dict with the following keys:
  - `'trainval_im_names'`
  - `'trainval_ids2labels'`
  - `'train_im_names'`
  - `'train_ids2labels'`
  - `'val_im_names'`
  - `'val_marks'`
  - `'test_im_names'`
  - `'test_marks'`
- Each val or test image carries a mark denoting whether it belongs to the query (`mark == 0`), gallery (`mark == 1`), or multi-query (`mark == 2`) set.

You can download what I have transformed for the project from Google Drive or BaiduYun. Otherwise, you can download the original dataset and transform it using my script, described below.
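As a quick sanity check after downloading or transforming a dataset, you can load the partition file and inspect it; the path below is a placeholder for your own.

```python
import pickle

# Placeholder path; point it at your transformed dataset.
with open('/path/to/market1501/partitions.pkl', 'rb') as f:
    partitions = pickle.load(f)

print(sorted(partitions.keys()))
print(len(partitions['trainval_im_names']), 'trainval images')
```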
Download the Market1501 dataset from here. Run the following script to transform the dataset, replacing the paths with yours.
```bash
python script/dataset/transform_market1501.py \
--zip_file ~/Dataset/market1501/Market-1501-v15.09.15.zip \
--save_dir ~/Dataset/market1501
```
We follow the new training/testing protocol proposed in the paper

```
@inproceedings{zhong2017re,
  title={Re-ranking Person Re-identification with k-reciprocal Encoding},
  author={Zhong, Zhun and Zheng, Liang and Cao, Donglin and Li, Shaozi},
  booktitle={CVPR},
  year={2017}
}
```
Details of the new protocol can be found here.
You can download what I have transformed for the project from Google Drive or BaiduYun. Otherwise, you can download the original dataset and transform it using my script, described below.
Download the CUHK03 dataset from here. Then download the training/testing partition file from Google Drive or BaiduYun. This partition file specifies which images are in the training, query or gallery set. Finally, run the following script to transform the dataset, replacing the paths with yours.
```bash
python script/dataset/transform_cuhk03.py \
--zip_file ~/Dataset/cuhk03/cuhk03_release.zip \
--train_test_partition_file ~/Dataset/cuhk03/re_ranking_train_test_split.pkl \
--save_dir ~/Dataset/cuhk03
```
You can download what I have transformed for the project from Google Drive or BaiduYun. Otherwise, you can download the original dataset and transform it using my script, described below.
Download the DukeMTMC-reID dataset from here. Run the following script to transform the dataset, replacing the paths with yours.
```bash
python script/dataset/transform_duke.py \
--zip_file ~/Dataset/duke/DukeMTMC-reID.zip \
--save_dir ~/Dataset/duke
```
A larger training set tends to benefit deep learning models, so I combine the trainval sets of the three datasets Market1501, CUHK03 and DukeMTMC-reID. After training on the combined trainval set, the model can be tested on the three test sets as usual.
Transform the three separate datasets as introduced above if you have not done so.
For the trainval set, you can download what I have transformed from Google Drive or BaiduYun. Otherwise, you can run the following script to combine the trainval sets, replacing the paths with yours.
```bash
python script/dataset/combine_trainval_sets.py \
--market1501_im_dir ~/Dataset/market1501/images \
--market1501_partition_file ~/Dataset/market1501/partitions.pkl \
--cuhk03_im_dir ~/Dataset/cuhk03/detected/images \
--cuhk03_partition_file ~/Dataset/cuhk03/detected/partitions.pkl \
--duke_im_dir ~/Dataset/duke/images \
--duke_partition_file ~/Dataset/duke/partitions.pkl \
--save_dir ~/Dataset/market1501_cuhk03_duke
```
The project requires you to configure the dataset paths. In `tri_loss/dataset/__init__.py`, modify the following snippet according to the saving paths you used when preparing the datasets.
```python
# In file tri_loss/dataset/__init__.py

########################################
# Specify Directory and Partition File #
########################################

if name == 'market1501':
    im_dir = ospeu('~/Dataset/market1501/images')
    partition_file = ospeu('~/Dataset/market1501/partitions.pkl')
elif name == 'cuhk03':
    im_type = ['detected', 'labeled'][0]
    im_dir = ospeu(ospj('~/Dataset/cuhk03', im_type, 'images'))
    partition_file = ospeu(ospj('~/Dataset/cuhk03', im_type, 'partitions.pkl'))
elif name == 'duke':
    im_dir = ospeu('~/Dataset/duke/images')
    partition_file = ospeu('~/Dataset/duke/partitions.pkl')
elif name == 'combined':
    assert part in ['trainval'], \
        "Only trainval part of the combined dataset is available now."
    im_dir = ospeu('~/Dataset/market1501_cuhk03_duke/trainval_images')
    partition_file = ospeu('~/Dataset/market1501_cuhk03_duke/partitions.pkl')
```
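The snippet relies on two aliases. Judging from their use on `'~'`-prefixed paths, they are shorthand for `os.path` helpers, roughly as follows (assumed for readability; check the top of the file for the actual definitions):

```python
import os.path as osp

ospj = osp.join         # join path components
ospeu = osp.expanduser  # expand the leading '~' to the home directory
```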
The datasets used in this project all follow the standard evaluation protocol of Market1501, using the CMC and mAP metrics. According to open-reid, the CMC setting is as follows:
```python
# In file tri_loss/dataset/__init__.py
cmc_kwargs = dict(separate_camera_set=False,
                  single_gallery_shot=False,
                  first_match_break=True)
```
To play with different CMC options, you can modify it according to open-reid's configurations:
```python
# In open-reid's reid/evaluators.py
# Compute all kinds of CMC scores
cmc_configs = {
    'allshots': dict(separate_camera_set=False,
                     single_gallery_shot=False,
                     first_match_break=False),
    'cuhk03': dict(separate_camera_set=True,
                   single_gallery_shot=True,
                   first_match_break=False),
    'market1501': dict(separate_camera_set=False,
                       single_gallery_shot=False,
                       first_match_break=True)}
```
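To make the flags concrete, here is a rough sketch of the `market1501` setting with `first_match_break=True`: for each query, gallery samples sharing both the query's identity and camera are excluded, and the rank of the first correct match is credited. This is only an illustration, not open-reid's actual implementation.

```python
import numpy as np

def cmc_market1501(dist, q_ids, g_ids, q_cams, g_cams, topk=10):
    # dist: (num_query, num_gallery) distance matrix.
    hits = np.zeros(topk)
    valid_queries = 0
    for i in range(len(q_ids)):
        # Exclude gallery samples with the same identity AND the same camera.
        keep = ~((g_ids == q_ids[i]) & (g_cams == q_cams[i]))
        order = np.argsort(dist[i][keep])
        matches = g_ids[keep][order] == q_ids[i]
        if not matches.any():
            continue  # no valid ground truth for this query
        valid_queries += 1
        first = matches.argmax()   # position of the first correct match
        if first < topk:
            hits[first:] += 1      # first_match_break: credit ranks >= first
    return hits / valid_queries    # CMC scores at ranks 1..topk
```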
You can use a trained model to extract features for a list of images, and then do whatever you desire with these features. An example:
```bash
python script/experiment/infer_images_example.py \
--model_weight_file YOUR_MODEL_WEIGHT_FILE
```
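As one example of what to do with the extracted features, you could retrieve the top-5 gallery matches for each query by Euclidean distance. The file names and shapes below are assumptions about how you saved the features.

```python
import numpy as np

q = np.load('query_feats.npy')    # assumed (num_query, d) array
g = np.load('gallery_feats.npy')  # assumed (num_gallery, d) array

# Squared-expansion trick: ||q - g||^2 = ||q||^2 + ||g||^2 - 2 q.g,
# avoiding a huge (num_query, num_gallery, d) intermediate tensor.
sq = (q ** 2).sum(axis=1)[:, None] + (g ** 2).sum(axis=1)[None, :] - 2 * q.dot(g.T)
dist = np.sqrt(np.maximum(sq, 0))
top5 = np.argsort(dist, axis=1)[:, :5]  # gallery indices of the 5 nearest matches
```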
My training log and saved model weights for three datasets can be downloaded from Google Drive or BaiduYun.
Specify
- a dataset name (one of `market1501`, `cuhk03`, `duke`)
- stride, `1` or `2`
- the path of the downloaded `model_weight.pth`

in the following command and run it.
```bash
python script/experiment/train.py \
-d '(0,)' \
--only_test true \
--dataset DATASET_NAME \
--last_conv_stride STRIDE \
--normalize_feature false \
--exp_dir EXPERIMENT_DIRECTORY \
--model_weight_file THE_DOWNLOADED_MODEL_WEIGHT_FILE
```
You can also train the model yourself. The following command performs training, validation and finally testing automatically.
Specify
- a dataset name (one of `['market1501', 'cuhk03', 'duke']`)
- stride, `1` or `2`
- whether to train on the `trainval` set or the `train` set (the latter for tuning parameters)

in the following command and run it.
```bash
python script/experiment/train.py \
-d '(0,)' \
--only_test false \
--dataset DATASET_NAME \
--last_conv_stride STRIDE \
--normalize_feature false \
--trainset_part TRAINVAL_OR_TRAIN \
--exp_dir EXPERIMENT_DIRECTORY \
--steps_per_log 10 \
--epochs_per_val 5
```
During training, you can run TensorBoard and access port `6006` to watch the loss curves etc. For example:
```bash
# Modify the path for `--logdir` accordingly.
tensorboard --logdir YOUR_EXPERIMENT_DIRECTORY/tensorboard
```
For more usage of TensorBoard, see the website and the help:
```bash
tensorboard --help
```
Specify
- a dataset name (one of `['market1501', 'cuhk03', 'duke']`)
- stride, `1` or `2`
- either `model_weight_file` (the downloaded `model_weight.pth`) OR `ckpt_file` (the `ckpt.pth` saved during training)

in the following command and run it.
```bash
python script/experiment/visualize_rank_list.py \
-d '(0,)' \
--num_queries 16 \
--rank_list_size 10 \
--dataset DATASET_NAME \
--last_conv_stride STRIDE \
--normalize_feature false \
--exp_dir EXPERIMENT_DIRECTORY \
--model_weight_file '' \
--ckpt_file ''
```
Each query image and its ranking list will be saved to an image in the directory `EXPERIMENT_DIRECTORY/rank_lists`. As shown in the following examples, a green boundary is added to true positives and a red one to false positives.
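The colored boundaries can be reproduced with something as simple as the following PIL snippet; this is illustrative, not the repo's actual drawing code.

```python
from PIL import ImageOps

def add_match_border(im, is_true_positive, width=3):
    # Green border for a true positive, red for a false positive.
    return ImageOps.expand(im, border=width,
                           fill='green' if is_true_positive else 'red')
```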
Tested with CentOS 7, Intel(R) Xeon(R) CPU E5-2618L v3 @ 2.30GHz, GeForce GTX TITAN X.
Note that the following time consumption is not guaranteed across machines, especially when the system is busy.
For the following settings
- `stride = 1` in the last block
- `identities_per_batch = 32`, `images_per_identity = 4`, `images_per_batch = 32 x 4 = 128`
- `h x w = 256 x 128`

it occupies ~11000 MB of GPU memory.
If you do not have a 12 GB GPU, you have to either decrease `identities_per_batch` or use multiple GPUs.
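Judging from the `-d '(0,)'` flag in the example commands, multiple GPUs are likely specified as a longer device tuple such as `-d '(0,1)'`. In plain PyTorch the same idea is the usual `DataParallel` wrapper; a generic sketch, not the repo's exact mechanism:

```python
import torch
import torch.nn as nn
import torchvision

model = torchvision.models.resnet50()  # stand-in for the re-id network
if torch.cuda.device_count() > 1:
    # DataParallel splits each batch across GPUs, reducing per-GPU memory.
    model = nn.DataParallel(model)
model = model.cuda()
```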
Taking Market1501 as an example, it contains 31969 training images of 751 identities; since an epoch iterates over identities (32 per batch), 1 epoch = 751 / 32 ≈ 24 iterations. Each iteration takes ~1.08s, so each epoch takes ~27s and training for 300 epochs takes ~2.25 hours.
Taking Market1501 as an example, with `images_per_batch = 32`:
- extracting features of the whole test set (12936 images) takes ~160s
- computing the query-gallery distance (a `3368 x 15913` matrix) takes ~2s
- re-ranking, which requires computing the query-query distance (a `3368 x 3368` matrix) and the gallery-gallery distance (a `15913 x 15913` matrix, the most time-consuming part), takes ~90s