Bias Eliminate Domain Adaptive Pedestrian Re-identification [Technique Report]

This repo contains our 1st-place solution to the VisDA2020 challenge (domain adaptive pedestrian re-identification) at the ECCV 2020 workshop.

Introduction

This work addresses the domain adaptive pedestrian re-identification problem by eliminating the bias introduced by the inter-domain gap and by intra-domain camera differences.
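To make the camera-bias idea concrete, below is a minimal, illustrative sketch of one common way to suppress intra-domain camera bias at inference time: use a camera classifier's outputs to penalize query-gallery pairs that likely come from the same camera. This is only an illustration of the general idea, not the authors' exact method; all names (`reid_dist`, `cam_prob_q`, `cam_prob_g`, `lambda_cam`) are hypothetical.

```python
import numpy as np

def remove_camera_bias(reid_dist, cam_prob_q, cam_prob_g, lambda_cam=0.1):
    """Illustrative post-processing step, not the official implementation.

    reid_dist:  (num_query, num_gallery) appearance distance matrix.
    cam_prob_q: (num_query, num_cams) softmax outputs of a camera classifier.
    cam_prob_g: (num_gallery, num_cams) softmax outputs of a camera classifier.
    lambda_cam: weight of the camera-similarity penalty (assumed hyper-parameter).
    """
    # Camera similarity: high when a query/gallery pair is likely same-camera.
    cam_sim = cam_prob_q @ cam_prob_g.T  # (num_query, num_gallery)
    # Push apart pairs that the camera model considers same-camera, which
    # counteracts the tendency of ReID features to cluster by camera.
    return reid_dist + lambda_cam * cam_sim
```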

This project is mainly based on reid-strong-baseline.

Get Started

  1. Clone the repo: git clone https://github.com/vimar-gu/Bias-Eliminate-DA-ReID.git
  2. Install dependencies:
    • pytorch >= 1.0.0
    • python >= 3.5
    • torchvision
    • yacs
  3. Prepare the dataset. It can be obtained from Simon4Yan/VisDA2020.
  4. We use ResNet-ibn and HRNet as backbones. ImageNet-pretrained models can be downloaded here and here; a minimal loading sketch is shown after this list.
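As a rough illustration of step 4, the snippet below shows how a downloaded ImageNet checkpoint is typically loaded into a backbone before ReID training. It is a generic PyTorch sketch, not the repo's actual model-building code; the backbone import and the checkpoint path are placeholders for whatever the links above provide.

```python
import torch

# Hypothetical import: replace with the backbone definition used in this repo.
from modeling.backbones.resnet_ibn_a import resnet50_ibn_a

def build_backbone(pretrain_path="./pretrained/resnet50_ibn_a.pth"):
    """Generic sketch: build a backbone and load ImageNet-pretrained weights."""
    model = resnet50_ibn_a(last_stride=1)
    state_dict = torch.load(pretrain_path, map_location="cpu")
    # Some checkpoints wrap the weights in a "state_dict" key.
    state_dict = state_dict.get("state_dict", state_dict)
    # Drop the ImageNet classification head, which a ReID backbone does not use.
    state_dict = {k: v for k, v in state_dict.items() if not k.startswith("fc.")}
    # strict=False tolerates keys that exist in the model but not in the checkpoint.
    model.load_state_dict(state_dict, strict=False)
    return model
```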

Run

If you want to reproduce our results, please refer to VisDA.md.

Results

The performance on the VisDA2020 validation set:

| Method | mAP | Rank-1 | Rank-5 | Rank-10 |
| --- | --- | --- | --- | --- |
| Baseline | 30.7 | 59.7 | 77.5 | 83.3 |
| + Domain Adaptation | 44.9 | 75.3 | 86.7 | 91.0 |
| + Finetuning | 48.6 | 79.8 | 88.3 | 91.5 |
| + Post Processing | 70.9 | 86.5 | 92.8 | 94.4 |
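For readers unfamiliar with these metrics, the sketch below shows a standard (Market-1501-style) way to compute mAP and CMC Rank-k from a query-gallery distance matrix. It is a generic illustration of the evaluation protocol, not the challenge's official evaluation code; all variable names are placeholders.

```python
import numpy as np

def evaluate(distmat, q_pids, g_pids, q_camids, g_camids, max_rank=10):
    """Standard ReID evaluation sketch: mAP and CMC Rank-1/5/10.

    distmat: (num_query, num_gallery) distances, smaller = more similar.
    *_pids:  person identity labels; *_camids: camera labels.
    """
    num_q, _ = distmat.shape
    indices = np.argsort(distmat, axis=1)  # gallery ranking per query
    all_cmc, all_ap = [], []

    for i in range(num_q):
        order = indices[i]
        # Drop gallery images of the same identity taken by the same camera,
        # which would make the ranking trivially easy.
        keep = ~((g_pids[order] == q_pids[i]) & (g_camids[order] == q_camids[i]))
        matches = (g_pids[order][keep] == q_pids[i]).astype(np.int32)
        if not matches.any():
            continue  # query identity absent from the gallery

        # CMC: 1 from the first correct match onwards.
        cmc = matches.cumsum()
        cmc[cmc > 1] = 1
        all_cmc.append(cmc[:max_rank])

        # Average precision over the positions of the correct matches.
        precision = matches.cumsum() / (np.arange(len(matches)) + 1.0)
        all_ap.append((precision * matches).sum() / matches.sum())

    all_cmc = np.stack(all_cmc).mean(axis=0)
    return np.mean(all_ap), all_cmc  # mAP, CMC curve
```

Rank-1/5/10 in the table correspond to `all_cmc[0]`, `all_cmc[4]`, and `all_cmc[9]`, expressed as percentages.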

Trained models

The models can be downloaded from:

The camera models can be downloaded from:

Some tips