A PyTorch implementation of the Local-Bottom Network (LB) from the paper:
Wu, Zifeng, et al. "A comprehensive study on cross-view gait based human identification with deep cnns." IEEE transactions on pattern analysis and machine intelligence 39.2 (2017): 209-226.
The models are defined in `src/model.py`. There are two models, `LBNet` and `LBNet_1`; `LBNet_1` is closer to the model described in Section 4.2.1 of the original paper. You can select either one, since the results are close to each other.
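As a rough illustration, here is a minimal sketch of an LB-style network in PyTorch. The layer sizes, the 126x126 input size, the 2-channel GEI-pair input, and the class name are all assumptions for illustration, not the repo's actual configuration:

```python
import torch
import torch.nn as nn

class LBNetSketch(nn.Module):
    """Illustrative stand-in for LBNet; sizes are assumptions, not the
    paper's or the repo's exact configuration."""
    def __init__(self):
        super().__init__()
        # The "local @ bottom" idea presumably combines the two gait energy
        # images (GEIs) at the bottom of the network; here that is
        # approximated by stacking the pair as two input channels.
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=7), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 64, kernel_size=7), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 256, kernel_size=7), nn.ReLU(),
        )
        # Binary output: same subject vs. different subjects.
        self.classifier = nn.Linear(256 * 21 * 21, 2)

    def forward(self, gei_pair):  # gei_pair: (N, 2, 126, 126)
        x = self.features(gei_pair)
        return self.classifier(x.flatten(1))
```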
To train, run `mkdir snapshot` to create the directory for saving models, then enter the `src` dir and run

```
python3 train.py
```
The model will be saved into the execution dir every 10000 iterations. You can change the interval in `train.py`.
To monitor training, start a Visdom server:

```
python3 -m visdom.server -port 5274
```

(or any port you like; change the port in `train.py` and `test.py` accordingly), then open http://localhost:5274.
You will see the training loss curve and the validation accuracy curve.
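The curves are presumably pushed from the training loop to the Visdom server. A minimal sketch of such a logging helper (the function and window names are assumptions, not the repo's code):

```python
def plot_point(vis, win, x, y, title):
    """Append (x, y) to the Visdom line plot `win`, creating it if needed."""
    vis.line(X=[x], Y=[y], win=win, update='append',
             opts={'title': title})

# Usage (requires a running Visdom server):
#   import visdom
#   vis = visdom.Visdom(port=5274)
#   plot_point(vis, 'loss', iteration, loss_value, 'training loss')
```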
To test, enter the `src` dir and run `python3 test.py`.
You can select which snapshot to use by modifying the line

```
checkpoint = th.load('../snapshot/snapshot_75000.pth')
```

in `test.py` to point at another snapshot. Be patient, since testing takes a long time. The computed similarities will be saved into `similarity.npy`.
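For illustration, here is a sketch of producing and saving a probe-by-gallery similarity matrix. In the repo the scores come from the network's output for each pair; cosine similarity between made-up feature vectors stands in for that score here:

```python
import numpy as np

def similarity_matrix(probe_feats, gallery_feats):
    """Return an (n_probe, n_gallery) matrix of cosine similarities."""
    p = probe_feats / np.linalg.norm(probe_feats, axis=1, keepdims=True)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    return p @ g.T

# Hypothetical feature vectors: 5 probe samples, 8 gallery samples.
probe = np.random.rand(5, 64)
gallery = np.random.rand(8, 64)
sim = similarity_matrix(probe, gallery)
np.save('similarity.npy', sim)  # same file name test.py writes
```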
Then run

```
python3 compute_acc_per_angle.py
```

to compute the accuracy for each probe view and gallery view. The results will be saved into `acc_table.csv`.
You will get a table like this.
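The per-view accuracy computation might look like the following sketch, which assumes rank-1 accuracy (a probe counts as correct when its most similar gallery sample shares its identity); the views, sizes, and random scores below are made up for illustration:

```python
import csv
import numpy as np

views = [0, 18, 36]              # hypothetical probe/gallery viewing angles
rng = np.random.default_rng(0)

with open('acc_table.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['probe\\gallery'] + views)
    for pv in views:
        row = [pv]
        for gv in views:
            # Stand-in scores: in the real script these come from
            # similarity.npy for this (probe view, gallery view) pair.
            sim = rng.random((10, 10))       # 10 probes x 10 gallery samples
            labels = np.arange(10)           # one gallery sample per identity
            # Rank-1 accuracy: best-matching gallery sample has same identity.
            pred = sim.argmax(axis=1)
            row.append((pred == labels).mean())
        writer.writerow(row)
```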