Gait3D / Gait3D-Benchmark

This is the code for the papers "Gait Recognition in the Wild with Dense 3D Representations and A Benchmark" (CVPR 2022), "Gait Recognition in the Wild with Multi-hop Temporal Switch", and "Parsing is All You Need for Accurate Gait Recognition in the Wild".

About the ‘Cross Domain’ issues #19

Open updatelse opened 1 year ago

updatelse commented 1 year ago

Excuse me, is there any difference between the GREW_our_split.json and GREW_office_split.json files in this project? Why did you decide to re-divide the test set to evaluate model generalization performance?

While researching gait recognition, I found that a model trained on the GREW dataset did not perform well when evaluated on Gait3D. It seems that when a gait recognition model is trained on one dataset, its metrics drop sharply once it is transferred to another dataset for evaluation. What do you think?

JinkaiZheng commented 1 year ago

Hi~ The GREW official evaluation website was not available when we were working on Gait3D. To carry out the cross-domain experiment, we randomly sampled 1,000 IDs from the training set for evaluation and used the remaining 19,000 IDs for training. These instructions can be found in Section 2.2.1 of the supplementary material.
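
For reference, here is a minimal sketch of how such a random ID split could be generated and saved to JSON. The key names ("TRAIN_SET" / "TEST_SET"), the zero-padded ID format, and the exact seeding are assumptions for illustration, not the confirmed schema of GREW_our_split.json:

```python
# Hypothetical sketch of building an "our split" file: hold out 1,000 of the
# 20,000 GREW training IDs for evaluation, keep the remaining 19,000 for training.
import json
import random

random.seed(0)  # fix the seed so the split is reproducible

# Assumed ID format: zero-padded strings "00000" .. "19999".
all_ids = [f"{i:05d}" for i in range(20000)]

random.shuffle(all_ids)
test_ids = sorted(all_ids[:1000])    # 1,000 IDs held out for evaluation
train_ids = sorted(all_ids[1000:])   # remaining 19,000 IDs for training

with open("GREW_our_split.json", "w") as f:
    json.dump({"TRAIN_SET": train_ids, "TEST_SET": test_ids}, f, indent=2)
```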

By the time we released this repo, we found that the GREW official evaluation website was open again. Therefore, we added "GREW_office_split.json" to this repo.

Gait3D and GREW were collected in different domains, so there is a domain gap between them. Improving performance under this kind of domain shift is an interesting and practical line of work. Perhaps you can look through papers on "domain adaptation" for inspiration.