JDAI-CV / fast-reid

SOTA Re-identification Methods and Toolbox
Apache License 2.0

rank-1 accuracy of the model is very different from the baseline #462

Closed: duerwen closed this issue 3 years ago

duerwen commented 3 years ago

Hello. When the pretrained model is loaded during training, the following message is printed:
[04/14 20:15:15 fastreid.modeling.backbones.resnet]: Loading pretrained model from /home/web/.cache/torch/checkpoints/resnet50_ibn_a-d9d0bb7b.pth
[04/14 20:15:15 fastreid.modeling.backbones.resnet]: The checkpoint state_dict contains keys that are not used by the model: fc.{weight, bias}
In the end, the training results differ considerably from the results reported in the repository documentation.
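
The "keys that are not used by the model" message is expected whenever an ImageNet classification checkpoint is loaded into a backbone that has no final fc layer, since re-ID backbones discard the 1000-class classifier. A minimal PyTorch sketch of the same situation, using torchvision's plain ResNet-50 for illustration rather than fastreid's own loader (which does essentially the same non-strict key matching):

```python
import torch
import torchvision

# Re-ID backbones drop the ImageNet classifier, so build ResNet-50 without fc.
backbone = torchvision.models.resnet50()
backbone.fc = torch.nn.Identity()

# Load an ImageNet classification checkpoint (torchvision's own weights here,
# purely for illustration of why fc.{weight, bias} is reported as unused).
state_dict = torchvision.models.resnet50(pretrained=True).state_dict()
incompatible = backbone.load_state_dict(state_dict, strict=False)

print(incompatible.unexpected_keys)  # ['fc.weight', 'fc.bias'] -- the same keys as in the log
```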

Config: mgn_R50-ibn.yml
CUDNN_BENCHMARK: true
DATALOADER:
  NAIVE_WAY: true
  NUM_INSTANCE: 16
  NUM_WORKERS: 8
  PK_SAMPLER: true
DATASETS:
  COMBINEALL: false
  NAMES:
  - DukeMTMC
  TESTS:
  - DukeMTMC
INPUT:
  AUGMIX_PROB: 0.0
  AUTOAUG_PROB: 0.1
  CJ:
    BRIGHTNESS: 0.15
    CONTRAST: 0.15
    ENABLED: false
    HUE: 0.1
    PROB: 0.5
    SATURATION: 0.1
  DO_AFFINE: false
  DO_AUGMIX: false
  DO_AUTOAUG: true
  DO_FLIP: true
  DO_PAD: true
  FLIP_PROB: 0.5
  PADDING: 10
  PADDING_MODE: constant
  REA:
    ENABLED: true
    PROB: 0.5
    VALUE:
    - 123.675
    - 116.28
    - 103.53
  RPT:
    ENABLED: false
    PROB: 0.5
  SIZE_TEST:
  - 384
  - 128
  SIZE_TRAIN:
  - 384
  - 128
KD:
  MODEL_CONFIG:
  - ''
  MODEL_WEIGHTS:
  - ''
MODEL:
  BACKBONE:
    DEPTH: 50x
    FEAT_DIM: 2048
    LAST_STRIDE: 1
    NAME: build_resnet_backbone
    NORM: BN
    PRETRAIN: true
    PRETRAIN_PATH: ''
    WITH_IBN: true
    WITH_NL: false
    WITH_SE: false
  DEVICE: cuda
  FREEZE_LAYERS:
  - backbone
  - b1
  - b2
  - b3
  HEADS:
    CLS_LAYER: circleSoftmax
    EMBEDDING_DIM: 256
    MARGIN: 0.35
    NAME: EmbeddingHead
    NECK_FEAT: after
    NORM: BN
    NUM_CLASSES: 702
    POOL_LAYER: gempoolP
    SCALE: 64
    WITH_BNNECK: true
  LOSSES:
    CE:
      ALPHA: 0.2
      EPSILON: 0.1
      SCALE: 1.0
    CIRCLE:
      GAMMA: 128
      MARGIN: 0.25
      SCALE: 1.0
    COSFACE:
      GAMMA: 128
      MARGIN: 0.25
      SCALE: 1.0
    FL:
      ALPHA: 0.25
      GAMMA: 2
      SCALE: 1.0
    NAME:
    - CrossEntropyLoss
    - TripletLoss
    TRI:
      HARD_MINING: true
      MARGIN: 0.0
      NORM_FEAT: false
      SCALE: 1.0
  META_ARCHITECTURE: MGN
  PIXEL_MEAN:
  - 123.675
  - 116.28
  - 103.53
  PIXEL_STD:
  - 58.395
  - 57.120000000000005
  - 57.375
  QUEUE_SIZE: 8192
  WEIGHTS: ''
OUTPUT_DIR: logs/dukemtmc/mgn_R50-ibn
SOLVER:
  BASE_LR: 0.00035
  BIAS_LR_FACTOR: 1.0
  CHECKPOINT_PERIOD: 20
  DELAY_EPOCHS: 30
  ETA_MIN_LR: 7.0e-07
  FP16_ENABLED: false
  FREEZE_FC_ITERS: 0
  FREEZE_ITERS: 1000
  GAMMA: 0.1
  HEADS_LR_FACTOR: 1.0
  IMS_PER_BATCH: 64
  MAX_EPOCH: 60
  MOMENTUM: 0.9
  NESTEROV: true
  OPT: Adam
  SCHED: CosineAnnealingLR
  STEPS:
  - 40
  - 90
  WARMUP_FACTOR: 0.1
  WARMUP_ITERS: 2000
  WARMUP_METHOD: linear
  WEIGHT_DECAY: 0.0005
  WEIGHT_DECAY_BIAS: 0.0005
TEST:
  AQE:
    ALPHA: 3.0
    ENABLED: false
    QE_K: 5
    QE_TIME: 1
  EVAL_PERIOD: 10
  FLIP_ENABLED: false
  IMS_PER_BATCH: 128
  METRIC: cosine
  PRECISE_BN:
    DATASET: Market1501
    ENABLED: false
    NUM_ITER: 300
  RERANK:
    ENABLED: false
    K1: 20
    K2: 6
    LAMBDA: 0.3
  ROC_ENABLED: false
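
For cross-checking the run against the repository's Base-MGN.yml / mgn_R50-ibn.yml baseline, the effective settings above can also be read back from the config.yaml that the run writes to OUTPUT_DIR (see the "Full config saved to ..." line in the log below). A minimal sketch with PyYAML, assuming that saved path and that fastreid follows the detectron2 convention of splitting SOLVER.IMS_PER_BATCH across GPUs:

```python
import yaml  # PyYAML

# Path taken from the "Full config saved to ..." line in the log below.
cfg_path = "logs/dukemtmc/mgn_R50-ibn/config.yaml"
with open(cfg_path) as f:
    cfg = yaml.safe_load(f)

# Settings that most directly affect the final accuracy of this run.
print("train size   :", cfg["INPUT"]["SIZE_TRAIN"])          # [384, 128]
print("base lr      :", cfg["SOLVER"]["BASE_LR"])            # 0.00035
print("max epoch    :", cfg["SOLVER"]["MAX_EPOCH"])          # 60
print("num instance :", cfg["DATALOADER"]["NUM_INSTANCE"])   # 16 images per identity

# SOLVER.IMS_PER_BATCH is the total batch size; assuming it is divided by the
# number of GPUs (detectron2 style), --num-gpus 2 gives 32 images per GPU.
total_batch = cfg["SOLVER"]["IMS_PER_BATCH"]                  # 64
print("per-GPU batch:", total_batch // 2)                     # 32
```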

The log is as follows:
ssh://web@10.0.10.202:22/home/web/dww/FastReId-Env/bin/python3.6 -u /home/web/dww/works/remote-fastReidV1.0-node2-fromBJB/tools/train_net.py --config-file ../configs/DukeMTMC/mgn_R50-ibn.yml --num-gpus 2
Command Line Args: Namespace(config_file='../configs/DukeMTMC/mgn_R50-ibn.yml', dist_url='tcp://127.0.0.1:50153', eval_only=False, machine_rank=0, num_gpus=2, num_machines=1, opts=[], resume=False)
[04/14 20:15:13 fastreid]: Rank of current process: 0. World size: 2
[04/14 20:15:14 fastreid]: Environment info:
----------------------  -------------------------------------------------------------------------------
sys.platform            linux
Python                  3.6.6 (default, Aug 13 2018, 18:24:23) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]
numpy                   1.19.5
fastreid                1.0.0 @/home/web/dww/works/remote-fastReidV1.0-node2-fromBJB/fastreid
FASTREID_ENV_MODULE     <not set>
PyTorch                 1.6.0+cu101 @/home/web/dww/FastReId-Env/lib/python3.6/site-packages/torch
PyTorch debug build     False
GPU available           True
GPU 0,1                 TITAN V
CUDA_HOME               /usr/local/cuda
Pillow                  8.2.0
torchvision             0.7.0+cu101 @/home/web/dww/FastReId-Env/lib/python3.6/site-packages/torchvision
torchvision arch flags  sm_35, sm_50, sm_60, sm_70, sm_75
----------------------  -------------------------------------------------------------------------------
PyTorch built with:
  - GCC 7.3
  - C++ Version: 201402
  - Intel(R) Math Kernel Library Version 2019.0.5 Product Build 20190808 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v1.5.0 (Git Hash e2ac1fac44c5078ca927cb9b90e1b3066a0b2ed0)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - NNPACK is enabled
  - CPU capability usage: AVX2
  - CUDA Runtime 10.1
  - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75
  - CuDNN 7.6.3
  - Magma 2.5.2
  - Build settings: BLAS=MKL, BUILD_TYPE=Release, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_VULKAN_WRAPPER -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, USE_CUDA=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_STATIC_DISPATCH=OFF, 

[04/14 20:15:14 fastreid]: Command line arguments: Namespace(config_file='../configs/DukeMTMC/mgn_R50-ibn.yml', dist_url='tcp://127.0.0.1:50153', eval_only=False, machine_rank=0, num_gpus=2, num_machines=1, opts=[], resume=False)
[04/14 20:15:14 fastreid]: Contents of args.config_file=../configs/DukeMTMC/mgn_R50-ibn.yml:
_BASE_: ../Base-MGN.yml

MODEL:
  BACKBONE:
    WITH_IBN: True

DATASETS:
  NAMES: ("DukeMTMC",)
  TESTS: ("DukeMTMC",)

OUTPUT_DIR: logs/dukemtmc/mgn_R50-ibn

[04/14 20:15:14 fastreid]: Running with full config:
CUDNN_BENCHMARK: True
DATALOADER:
  NAIVE_WAY: True
  NUM_INSTANCE: 16
  NUM_WORKERS: 8
  PK_SAMPLER: True
DATASETS:
  COMBINEALL: False
  NAMES: ('DukeMTMC',)
  TESTS: ('DukeMTMC',)
INPUT:
  AUGMIX_PROB: 0.0
  AUTOAUG_PROB: 0.1
  CJ:
    BRIGHTNESS: 0.15
    CONTRAST: 0.15
    ENABLED: False
    HUE: 0.1
    PROB: 0.5
    SATURATION: 0.1
  DO_AFFINE: False
  DO_AUGMIX: False
  DO_AUTOAUG: True
  DO_FLIP: True
  DO_PAD: True
  FLIP_PROB: 0.5
  PADDING: 10
  PADDING_MODE: constant
  REA:
    ENABLED: True
    PROB: 0.5
    VALUE: [123.675, 116.28, 103.53]
  RPT:
    ENABLED: False
    PROB: 0.5
  SIZE_TEST: [384, 128]
  SIZE_TRAIN: [384, 128]
KD:
  MODEL_CONFIG: ['']
  MODEL_WEIGHTS: ['']
MODEL:
  BACKBONE:
    DEPTH: 50x
    FEAT_DIM: 2048
    LAST_STRIDE: 1
    NAME: build_resnet_backbone
    NORM: BN
    PRETRAIN: True
    PRETRAIN_PATH: 
    WITH_IBN: True
    WITH_NL: False
    WITH_SE: False
  DEVICE: cuda
  FREEZE_LAYERS: ['backbone', 'b1', 'b2', 'b3']
  HEADS:
    CLS_LAYER: circleSoftmax
    EMBEDDING_DIM: 256
    MARGIN: 0.35
    NAME: EmbeddingHead
    NECK_FEAT: after
    NORM: BN
    NUM_CLASSES: 0
    POOL_LAYER: gempoolP
    SCALE: 64
    WITH_BNNECK: True
  LOSSES:
    CE:
      ALPHA: 0.2
      EPSILON: 0.1
      SCALE: 1.0
    CIRCLE:
      GAMMA: 128
      MARGIN: 0.25
      SCALE: 1.0
    COSFACE:
      GAMMA: 128
      MARGIN: 0.25
      SCALE: 1.0
    FL:
      ALPHA: 0.25
      GAMMA: 2
      SCALE: 1.0
    NAME: ('CrossEntropyLoss', 'TripletLoss')
    TRI:
      HARD_MINING: True
      MARGIN: 0.0
      NORM_FEAT: False
      SCALE: 1.0
  META_ARCHITECTURE: MGN
  PIXEL_MEAN: [123.675, 116.28, 103.53]
  PIXEL_STD: [58.395, 57.120000000000005, 57.375]
  QUEUE_SIZE: 8192
  WEIGHTS: 
OUTPUT_DIR: logs/dukemtmc/mgn_R50-ibn
SOLVER:
  BASE_LR: 0.00035
  BIAS_LR_FACTOR: 1.0
  CHECKPOINT_PERIOD: 20
  DELAY_EPOCHS: 30
  ETA_MIN_LR: 7e-07
  FP16_ENABLED: False
  FREEZE_FC_ITERS: 0
  FREEZE_ITERS: 1000
  GAMMA: 0.1
  HEADS_LR_FACTOR: 1.0
  IMS_PER_BATCH: 64
  MAX_EPOCH: 60
  MOMENTUM: 0.9
  NESTEROV: True
  OPT: Adam
  SCHED: CosineAnnealingLR
  STEPS: [40, 90]
  WARMUP_FACTOR: 0.1
  WARMUP_ITERS: 2000
  WARMUP_METHOD: linear
  WEIGHT_DECAY: 0.0005
  WEIGHT_DECAY_BIAS: 0.0005
TEST:
  AQE:
    ALPHA: 3.0
    ENABLED: False
    QE_K: 5
    QE_TIME: 1
  EVAL_PERIOD: 10
  FLIP_ENABLED: False
  IMS_PER_BATCH: 128
  METRIC: cosine
  PRECISE_BN:
    DATASET: Market1501
    ENABLED: False
    NUM_ITER: 300
  RERANK:
    ENABLED: False
    K1: 20
    K2: 6
    LAMBDA: 0.3
  ROC_ENABLED: False
[04/14 20:15:14 fastreid]: Full config saved to /home/web/dww/works/remote-fastReidV1.0-node2-fromBJB/tools/logs/dukemtmc/mgn_R50-ibn/config.yaml
[04/14 20:15:14 fastreid.utils.env]: Using a generated random seed 14478653
[04/14 20:15:14 fastreid.engine.defaults]: Prepare training set
[04/14 20:15:14 fastreid.data.datasets.bases]: => Loaded DukeMTMC in csv format: 
| subset   | # ids   | # images   | # cameras   |
|:---------|:--------|:-----------|:------------|
| train    | 702     | 16522      | 8           |
[04/14 20:15:14 fastreid.engine.defaults]: Auto-scaling the num_classes=702
[04/14 20:15:15 fastreid.modeling.backbones.resnet]: Loading pretrained model from /home/web/.cache/torch/checkpoints/resnet50_ibn_a-d9d0bb7b.pth
[04/14 20:15:15 fastreid.modeling.backbones.resnet]: The checkpoint state_dict contains keys that are not used by the model:
  fc.{weight, bias}
[04/14 20:15:15 fastreid.engine.defaults]: Model:
MGN(
  (backbone): Sequential(
    (0): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
    (1): BatchNorm(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU(inplace=True)
    (3): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=True)
    (4): Sequential(
      (0): Bottleneck(
        (conv1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): IBN(
          (IN): InstanceNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
          (BN): BatchNorm(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
        (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (se): Identity()
        (downsample): Sequential(
          (0): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (1): Bottleneck(
        (conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): IBN(
          (IN): InstanceNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
          (BN): BatchNorm(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
        (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (se): Identity()
      )
      (2): Bottleneck(
        (conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): IBN(
          (IN): InstanceNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
          (BN): BatchNorm(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
        (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (se): Identity()
      )
    )
    (5): Sequential(
      (0): Bottleneck(
        (conv1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): IBN(
          (IN): InstanceNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
          (BN): BatchNorm(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
        (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
        (bn2): BatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (se): Identity()
        (downsample): Sequential(
          (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
          (1): BatchNorm(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (1): Bottleneck(
        (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): IBN(
          (IN): InstanceNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
          (BN): BatchNorm(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
        (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (se): Identity()
      )
      (2): Bottleneck(
        (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): IBN(
          (IN): InstanceNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
          (BN): BatchNorm(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
        (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (se): Identity()
      )
      (3): Bottleneck(
        (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): IBN(
          (IN): InstanceNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
          (BN): BatchNorm(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
        (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (se): Identity()
      )
    )
    (6): Bottleneck(
      (conv1): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): IBN(
        (IN): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
        (BN): BatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn2): BatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (se): Identity()
      (downsample): Sequential(
        (0): Conv2d(512, 1024, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
  )
  (b1): Sequential(
    (0): Sequential(
      (0): Bottleneck(
        (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): IBN(
          (IN): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
          (BN): BatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
        (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (se): Identity()
      )
      (1): Bottleneck(
        (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): IBN(
          (IN): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
          (BN): BatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
        (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (se): Identity()
      )
      (2): Bottleneck(
        (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): IBN(
          (IN): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
          (BN): BatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
        (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (se): Identity()
      )
      (3): Bottleneck(
        (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): IBN(
          (IN): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
          (BN): BatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
        (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (se): Identity()
      )
      (4): Bottleneck(
        (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): IBN(
          (IN): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
          (BN): BatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
        (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (se): Identity()
      )
    )
    (1): Sequential(
      (0): Bottleneck(
        (conv1): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (se): Identity()
        (downsample): Sequential(
          (0): Conv2d(1024, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (1): Bottleneck(
        (conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (se): Identity()
      )
      (2): Bottleneck(
        (conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (se): Identity()
      )
    )
  )
  (b1_head): EmbeddingHead(
    (pool_layer): GeneralizedMeanPoolingP(Parameter containing:
    tensor([3.], device='cuda:0', requires_grad=True), output_size=1)
    (bottleneck): Sequential(
      (0): Conv2d(2048, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (1): BatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (classifier): CircleSoftmax(in_features=256, num_classes=702, scale=64, margin=0.35)
  )
  (b2): Sequential(
    (0): Sequential(
      (0): Bottleneck(
        (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): IBN(
          (IN): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
          (BN): BatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
        (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (se): Identity()
      )
      (1): Bottleneck(
        (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): IBN(
          (IN): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
          (BN): BatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
        (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (se): Identity()
      )
      (2): Bottleneck(
        (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): IBN(
          (IN): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
          (BN): BatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
        (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (se): Identity()
      )
      (3): Bottleneck(
        (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): IBN(
          (IN): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
          (BN): BatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
        (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (se): Identity()
      )
      (4): Bottleneck(
        (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): IBN(
          (IN): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
          (BN): BatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
        (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (se): Identity()
      )
    )
    (1): Sequential(
      (0): Bottleneck(
        (conv1): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (se): Identity()
        (downsample): Sequential(
          (0): Conv2d(1024, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (1): Bottleneck(
        (conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (se): Identity()
      )
      (2): Bottleneck(
        (conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (se): Identity()
      )
    )
  )
  (b2_head): EmbeddingHead(
    (pool_layer): GeneralizedMeanPoolingP(Parameter containing:
    tensor([3.], device='cuda:0', requires_grad=True), output_size=1)
    (bottleneck): Sequential(
      (0): Conv2d(2048, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (1): BatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (classifier): CircleSoftmax(in_features=256, num_classes=702, scale=64, margin=0.35)
  )
  (b21_head): EmbeddingHead(
    (pool_layer): GeneralizedMeanPoolingP(Parameter containing:
    tensor([3.], device='cuda:0', requires_grad=True), output_size=1)
    (bottleneck): Sequential(
      (0): Conv2d(2048, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (1): BatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (classifier): CircleSoftmax(in_features=256, num_classes=702, scale=64, margin=0.35)
  )
  (b22_head): EmbeddingHead(
    (pool_layer): GeneralizedMeanPoolingP(Parameter containing:
    tensor([3.], device='cuda:0', requires_grad=True), output_size=1)
    (bottleneck): Sequential(
      (0): Conv2d(2048, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (1): BatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (classifier): CircleSoftmax(in_features=256, num_classes=702, scale=64, margin=0.35)
  )
  (b3): Sequential(
    (0): Sequential(
      (0): Bottleneck(
        (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): IBN(
          (IN): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
          (BN): BatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
        (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (se): Identity()
      )
      (1): Bottleneck(
        (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): IBN(
          (IN): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
          (BN): BatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
        (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (se): Identity()
      )
      (2): Bottleneck(
        (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): IBN(
          (IN): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
          (BN): BatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
        (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (se): Identity()
      )
      (3): Bottleneck(
        (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): IBN(
          (IN): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
          (BN): BatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
        (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (se): Identity()
      )
      (4): Bottleneck(
        (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): IBN(
          (IN): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
          (BN): BatchNorm(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
        (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (se): Identity()
      )
    )
    (1): Sequential(
      (0): Bottleneck(
        (conv1): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (se): Identity()
        (downsample): Sequential(
          (0): Conv2d(1024, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (1): Bottleneck(
        (conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (se): Identity()
      )
      (2): Bottleneck(
        (conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): BatchNorm(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (se): Identity()
      )
    )
  )
  (b3_head): EmbeddingHead(
    (pool_layer): GeneralizedMeanPoolingP(Parameter containing:
    tensor([3.], device='cuda:0', requires_grad=True), output_size=1)
    (bottleneck): Sequential(
      (0): Conv2d(2048, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (1): BatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (classifier): CircleSoftmax(in_features=256, num_classes=702, scale=64, margin=0.35)
  )
  (b31_head): EmbeddingHead(
    (pool_layer): GeneralizedMeanPoolingP(Parameter containing:
    tensor([3.], device='cuda:0', requires_grad=True), output_size=1)
    (bottleneck): Sequential(
      (0): Conv2d(2048, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (1): BatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (classifier): CircleSoftmax(in_features=256, num_classes=702, scale=64, margin=0.35)
  )
  (b32_head): EmbeddingHead(
    (pool_layer): GeneralizedMeanPoolingP(Parameter containing:
    tensor([3.], device='cuda:0', requires_grad=True), output_size=1)
    (bottleneck): Sequential(
      (0): Conv2d(2048, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (1): BatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (classifier): CircleSoftmax(in_features=256, num_classes=702, scale=64, margin=0.35)
  )
  (b33_head): EmbeddingHead(
    (pool_layer): GeneralizedMeanPoolingP(Parameter containing:
    tensor([3.], device='cuda:0', requires_grad=True), output_size=1)
    (bottleneck): Sequential(
      (0): Conv2d(2048, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (1): BatchNorm(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (classifier): CircleSoftmax(in_features=256, num_classes=702, scale=64, margin=0.35)
  )
)
Warning:  apex was installed without --cpp_ext.  Falling back to Python flatten and unflatten.
Warning:  apex was installed without --cpp_ext.  Falling back to Python flatten and unflatten.
[04/14 20:15:28 fastreid.utils.checkpoint]: No checkpoint found. Training model from scratch
[04/14 20:15:28 fastreid.engine.train_loop]: Starting training from epoch 0
[04/14 20:15:28 fastreid.engine.hooks]: Freeze layer group "backbone, b1, b2, b3" training for 1000 iterations
[04/14 20:15:59 fastreid.utils.events]:  eta: 0:38:07  epoch/iter: 0/199  total_loss: 56.92  loss_cls_b1: 6.294  loss_cls_b2: 6.301  loss_cls_b21: 6.31  loss_cls_b22: 6.32  loss_cls_b3: 6.326  loss_cls_b31: 6.294  loss_cls_b32: 6.329  loss_cls_b33: 6.323  loss_triplet_b1: 1.16  loss_triplet_b2: 1.176  loss_triplet_b3: 1.151  loss_triplet_b22: 1.363  loss_triplet_b33: 1.553  time: 0.1500  data_time: 0.0008  lr: 6.63e-05  max_mem: 5116M
[04/14 20:16:08 fastreid.utils.events]:  eta: 0:37:59  epoch/iter: 0/257  total_loss: 56.73  loss_cls_b1: 6.283  loss_cls_b2: 6.279  loss_cls_b21: 6.316  loss_cls_b22: 6.289  loss_cls_b3: 6.306  loss_cls_b31: 6.281  loss_cls_b32: 6.316  loss_cls_b33: 6.304  loss_triplet_b1: 1.139  loss_triplet_b2: 1.149  loss_triplet_b3: 1.126  loss_triplet_b22: 1.316  loss_triplet_b33: 1.483  time: 0.1501  data_time: 0.0010  lr: 7.55e-05  max_mem: 5116M
[04/14 20:16:30 fastreid.utils.events]:  eta: 0:37:54  epoch/iter: 1/399  total_loss: 55.97  loss_cls_b1: 6.24  loss_cls_b2: 6.224  loss_cls_b21: 6.237  loss_cls_b22: 6.241  loss_cls_b3: 6.263  loss_cls_b31: 6.254  loss_cls_b32: 6.269  loss_cls_b33: 6.263  loss_triplet_b1: 1.071  loss_triplet_b2: 1.065  loss_triplet_b3: 1.07  loss_triplet_b22: 1.257  loss_triplet_b33: 1.397  time: 0.1514  data_time: 0.0007  lr: 9.78e-05  max_mem: 5116M
[04/14 20:16:48 fastreid.utils.events]:  eta: 0:37:37  epoch/iter: 1/515  total_loss: 55.2  loss_cls_b1: 6.168  loss_cls_b2: 6.168  loss_cls_b21: 6.18  loss_cls_b22: 6.188  loss_cls_b3: 6.209  loss_cls_b31: 6.189  loss_cls_b32: 6.21  loss_cls_b33: 6.202  loss_triplet_b1: 1.017  loss_triplet_b2: 1.012  loss_triplet_b3: 1.01  loss_triplet_b22: 1.179  loss_triplet_b33: 1.322  time: 0.1518  data_time: 0.0008  lr: 1.16e-04  max_mem: 5116M
[04/14 20:17:00 fastreid.utils.events]:  eta: 0:37:22  epoch/iter: 2/599  total_loss: 54.71  loss_cls_b1: 6.157  loss_cls_b2: 6.132  loss_cls_b21: 6.129  loss_cls_b22: 6.162  loss_cls_b3: 6.191  loss_cls_b31: 6.153  loss_cls_b32: 6.188  loss_cls_b33: 6.187  loss_triplet_b1: 0.9899  loss_triplet_b2: 0.9673  loss_triplet_b3: 0.9687  loss_triplet_b22: 1.12  loss_triplet_b33: 1.254  time: 0.1516  data_time: 0.0010  lr: 1.29e-04  max_mem: 5116M
[04/14 20:17:27 fastreid.utils.events]:  eta: 0:36:57  epoch/iter: 2/773  total_loss: 53.6  loss_cls_b1: 6.072  loss_cls_b2: 6.05  loss_cls_b21: 6.05  loss_cls_b22: 6.083  loss_cls_b3: 6.081  loss_cls_b31: 6.064  loss_cls_b32: 6.081  loss_cls_b33: 6.079  loss_triplet_b1: 0.915  loss_triplet_b2: 0.8932  loss_triplet_b3: 0.8931  loss_triplet_b22: 1.043  loss_triplet_b33: 1.19  time: 0.1516  data_time: 0.0008  lr: 1.57e-04  max_mem: 5116M
[04/14 20:17:31 fastreid.utils.events]:  eta: 0:36:52  epoch/iter: 3/799  total_loss: 53.51  loss_cls_b1: 6.047  loss_cls_b2: 6.031  loss_cls_b21: 6.04  loss_cls_b22: 6.073  loss_cls_b3: 6.038  loss_cls_b31: 6.035  loss_cls_b32: 6.072  loss_cls_b33: 6.068  loss_triplet_b1: 0.9184  loss_triplet_b2: 0.9125  loss_triplet_b3: 0.9015  loss_triplet_b22: 1.05  loss_triplet_b33: 1.206  time: 0.1515  data_time: 0.0007  lr: 1.61e-04  max_mem: 5116M
[04/14 20:18:01 fastreid.utils.events]:  eta: 0:36:27  epoch/iter: 3/999  total_loss: 52.16  loss_cls_b1: 5.901  loss_cls_b2: 5.908  loss_cls_b21: 5.893  loss_cls_b22: 5.932  loss_cls_b3: 5.938  loss_cls_b31: 5.9  loss_cls_b32: 5.965  loss_cls_b33: 5.964  loss_triplet_b1: 0.8682  loss_triplet_b2: 0.8677  loss_triplet_b3: 0.8778  loss_triplet_b22: 0.9924  loss_triplet_b33: 1.137  time: 0.1517  data_time: 0.0008  lr: 1.92e-04  max_mem: 5116M
[04/14 20:18:01 fastreid.engine.hooks]: Open layer group "backbone, b1, b2, b3" training
[04/14 20:18:17 fastreid.utils.events]:  eta: 0:36:27  epoch/iter: 3/1031  total_loss: 52.17  loss_cls_b1: 5.925  loss_cls_b2: 5.911  loss_cls_b21: 5.907  loss_cls_b22: 5.942  loss_cls_b3: 5.959  loss_cls_b31: 5.906  loss_cls_b32: 5.961  loss_cls_b33: 5.99  loss_triplet_b1: 0.8538  loss_triplet_b2: 0.8531  loss_triplet_b3: 0.856  loss_triplet_b22: 0.9695  loss_triplet_b33: 1.114  time: 0.1621  data_time: 0.0006  lr: 1.97e-04  max_mem: 9468M
[04/14 20:19:28 fastreid.utils.events]:  eta: 0:36:33  epoch/iter: 4/1199  total_loss: 50.11  loss_cls_b1: 5.902  loss_cls_b2: 5.871  loss_cls_b21: 5.84  loss_cls_b22: 5.91  loss_cls_b3: 5.876  loss_cls_b31: 5.834  loss_cls_b32: 5.904  loss_cls_b33: 5.919  loss_triplet_b1: 0.6028  loss_triplet_b2: 0.5707  loss_triplet_b3: 0.5778  loss_triplet_b22: 0.6288  loss_triplet_b33: 0.648  time: 0.1983  data_time: 0.0007  lr: 2.24e-04  max_mem: 9468M
[04/14 20:20:06 fastreid.utils.events]:  eta: 0:36:38  epoch/iter: 4/1289  total_loss: 48.93  loss_cls_b1: 5.787  loss_cls_b2: 5.794  loss_cls_b21: 5.736  loss_cls_b22: 5.801  loss_cls_b3: 5.724  loss_cls_b31: 5.73  loss_cls_b32: 5.837  loss_cls_b33: 5.782  loss_triplet_b1: 0.5325  loss_triplet_b2: 0.4762  loss_triplet_b3: 0.49  loss_triplet_b22: 0.5158  loss_triplet_b33: 0.5716  time: 0.2139  data_time: 0.0009  lr: 2.38e-04  max_mem: 9468M
[04/14 20:20:52 fastreid.utils.events]:  eta: 0:37:05  epoch/iter: 5/1399  total_loss: 46.89  loss_cls_b1: 5.517  loss_cls_b2: 5.548  loss_cls_b21: 5.49  loss_cls_b22: 5.626  loss_cls_b3: 5.535  loss_cls_b31: 5.547  loss_cls_b32: 5.62  loss_cls_b33: 5.615  loss_triplet_b1: 0.483  loss_triplet_b2: 0.4395  loss_triplet_b3: 0.4372  loss_triplet_b22: 0.4857  loss_triplet_b33: 0.5153  time: 0.2300  data_time: 0.0007  lr: 2.55e-04  max_mem: 9468M
[04/14 20:21:54 fastreid.utils.events]:  eta: 1:35:20  epoch/iter: 5/1547  total_loss: 45.11  loss_cls_b1: 5.301  loss_cls_b2: 5.319  loss_cls_b21: 5.283  loss_cls_b22: 5.443  loss_cls_b3: 5.362  loss_cls_b31: 5.324  loss_cls_b32: 5.435  loss_cls_b33: 5.393  loss_triplet_b1: 0.4436  loss_triplet_b2: 0.398  loss_triplet_b3: 0.4052  loss_triplet_b22: 0.4381  loss_triplet_b33: 0.4771  time: 0.2480  data_time: 0.0008  lr: 2.79e-04  max_mem: 9468M
[04/14 20:22:16 fastreid.utils.events]:  eta: 1:35:27  epoch/iter: 6/1599  total_loss: 43.84  loss_cls_b1: 5.185  loss_cls_b2: 5.201  loss_cls_b21: 5.222  loss_cls_b22: 5.331  loss_cls_b3: 5.129  loss_cls_b31: 5.217  loss_cls_b32: 5.32  loss_cls_b33: 5.294  loss_triplet_b1: 0.4032  loss_triplet_b2: 0.3601  loss_triplet_b3: 0.3867  loss_triplet_b22: 0.3921  loss_triplet_b33: 0.4469  time: 0.2537  data_time: 0.0007  lr: 2.87e-04  max_mem: 9468M
[04/14 20:23:40 fastreid.utils.events]:  eta: 1:34:59  epoch/iter: 6/1799  total_loss: 42.06  loss_cls_b1: 4.93  loss_cls_b2: 4.9  loss_cls_b21: 4.936  loss_cls_b22: 5.103  loss_cls_b3: 4.881  loss_cls_b31: 4.983  loss_cls_b32: 5.141  loss_cls_b33: 5.144  loss_triplet_b1: 0.375  loss_triplet_b2: 0.3457  loss_triplet_b3: 0.3372  loss_triplet_b22: 0.3697  loss_triplet_b33: 0.4141  time: 0.2721  data_time: 0.0008  lr: 3.18e-04  max_mem: 9468M
[04/14 20:23:42 fastreid.utils.events]:  eta: 1:34:57  epoch/iter: 6/1805  total_loss: 42  loss_cls_b1: 4.899  loss_cls_b2: 4.892  loss_cls_b21: 4.936  loss_cls_b22: 5.095  loss_cls_b3: 4.881  loss_cls_b31: 4.978  loss_cls_b32: 5.125  loss_cls_b33: 5.111  loss_triplet_b1: 0.378  loss_triplet_b2: 0.3481  loss_triplet_b3: 0.3439  loss_triplet_b22: 0.3731  loss_triplet_b33: 0.4145  time: 0.2726  data_time: 0.0008  lr: 3.19e-04  max_mem: 9468M
[04/14 20:25:04 fastreid.utils.events]:  eta: 1:34:02  epoch/iter: 7/1999  total_loss: 40.2  loss_cls_b1: 4.654  loss_cls_b2: 4.685  loss_cls_b21: 4.76  loss_cls_b22: 4.96  loss_cls_b3: 4.681  loss_cls_b31: 4.834  loss_cls_b32: 4.914  loss_cls_b33: 4.944  loss_triplet_b1: 0.3332  loss_triplet_b2: 0.3124  loss_triplet_b3: 0.318  loss_triplet_b22: 0.3318  loss_triplet_b33: 0.3928  time: 0.2869  data_time: 0.0007  lr: 3.50e-04  max_mem: 9468M
[04/14 20:25:31 fastreid.utils.events]:  eta: 1:33:31  epoch/iter: 7/2063  total_loss: 39.22  loss_cls_b1: 4.574  loss_cls_b2: 4.579  loss_cls_b21: 4.686  loss_cls_b22: 4.861  loss_cls_b3: 4.554  loss_cls_b31: 4.696  loss_cls_b32: 4.859  loss_cls_b33: 4.91  loss_triplet_b1: 0.3113  loss_triplet_b2: 0.2917  loss_triplet_b3: 0.2928  loss_triplet_b22: 0.3262  loss_triplet_b33: 0.3472  time: 0.2910  data_time: 0.0009  lr: 3.50e-04  max_mem: 9468M
[04/14 20:26:27 fastreid.utils.events]:  eta: 1:32:31  epoch/iter: 8/2199  total_loss: 37.6  loss_cls_b1: 4.4  loss_cls_b2: 4.362  loss_cls_b21: 4.53  loss_cls_b22: 4.66  loss_cls_b3: 4.359  loss_cls_b31: 4.568  loss_cls_b32: 4.636  loss_cls_b33: 4.758  loss_triplet_b1: 0.2753  loss_triplet_b2: 0.2541  loss_triplet_b3: 0.2566  loss_triplet_b22: 0.2749  loss_triplet_b33: 0.2997  time: 0.2988  data_time: 0.0012  lr: 3.50e-04  max_mem: 9468M
[04/14 20:27:19 fastreid.utils.events]:  eta: 1:31:40  epoch/iter: 8/2321  total_loss: 37.12  loss_cls_b1: 4.348  loss_cls_b2: 4.325  loss_cls_b21: 4.464  loss_cls_b22: 4.619  loss_cls_b3: 4.294  loss_cls_b31: 4.509  loss_cls_b32: 4.622  loss_cls_b33: 4.718  loss_triplet_b1: 0.2656  loss_triplet_b2: 0.2522  loss_triplet_b3: 0.2556  loss_triplet_b22: 0.2708  loss_triplet_b33: 0.2914  time: 0.3052  data_time: 0.0007  lr: 3.50e-04  max_mem: 9468M
[04/14 20:27:51 fastreid.utils.events]:  eta: 1:31:07  epoch/iter: 9/2399  total_loss: 36.23  loss_cls_b1: 4.228  loss_cls_b2: 4.164  loss_cls_b21: 4.343  loss_cls_b22: 4.542  loss_cls_b3: 4.165  loss_cls_b31: 4.404  loss_cls_b32: 4.502  loss_cls_b33: 4.611  loss_triplet_b1: 0.272  loss_triplet_b2: 0.2469  loss_triplet_b3: 0.2366  loss_triplet_b22: 0.2529  loss_triplet_b33: 0.2847  time: 0.3089  data_time: 0.0014  lr: 3.50e-04  max_mem: 9468M
[04/14 20:29:07 fastreid.engine.defaults]: Prepare testing set
[04/14 20:29:07 fastreid.data.datasets.bases]: => Loaded DukeMTMC in csv format: 
| subset   | # ids   | # images   | # cameras   |
|:---------|:--------|:-----------|:------------|
| query    | 702     | 2228       | 8           |
| gallery  | 1110    | 17661      | 8           |
[04/14 20:29:07 fastreid.evaluation.evaluator]: Start inference on 19889 images
[04/14 20:29:18 fastreid.evaluation.evaluator]: Inference done 11/156. 0.0330 s / batch. ETA=0:00:21
[04/14 20:29:46 fastreid.evaluation.evaluator]: Total inference time: 0:00:29.199098 (0.193372 s / batch per device)
[04/14 20:29:46 fastreid.evaluation.evaluator]: Total inference pure compute time: 0:00:07 (0.050162 s / batch per device)
[04/14 20:29:57 fastreid.evaluation.testing]: Evaluation results in csv format: 
| Datasets   | Rank-1   | Rank-5   | Rank-10   | mAP   | mINP   | metric   |
|:-----------|:---------|:---------|:----------|:------|:-------|:---------|
| DukeMTMC   | 72.80    | 83.26    | 87.57     | 54.93 | 14.24  | 63.86    |
[04/14 20:29:57 fastreid.utils.events]:  eta: 1:29:54  epoch/iter: 9/2579  total_loss: 34.76  loss_cls_b1: 4.053  loss_cls_b2: 4.004  loss_cls_b21: 4.181  loss_cls_b22: 4.332  loss_cls_b3: 3.98  loss_cls_b31: 4.295  loss_cls_b32: 4.383  loss_cls_b33: 4.447  loss_triplet_b1: 0.2407  loss_triplet_b2: 0.21  loss_triplet_b3: 0.2155  loss_triplet_b22: 0.2174  loss_triplet_b33: 0.241  time: 0.3167  data_time: 0.0012  lr: 3.50e-04  max_mem: 9468M
[04/14 20:30:06 fastreid.utils.events]:  eta: 1:29:46  epoch/iter: 10/2599  total_loss: 34.8  loss_cls_b1: 4.025  loss_cls_b2: 4.004  loss_cls_b21: 4.177  loss_cls_b22: 4.326  loss_cls_b3: 3.979  loss_cls_b31: 4.305  loss_cls_b32: 4.343  loss_cls_b33: 4.427  loss_triplet_b1: 0.2353  loss_triplet_b2: 0.2036  loss_triplet_b3: 0.2102  loss_triplet_b22: 0.2126  loss_triplet_b33: 0.2412  time: 0.3177  data_time: 0.0014  lr: 3.50e-04  max_mem: 9468M
[04/14 20:31:30 fastreid.utils.events]:  eta: 1:28:19  epoch/iter: 10/2799  total_loss: 33.22  loss_cls_b1: 3.889  loss_cls_b2: 3.801  loss_cls_b21: 4.057  loss_cls_b22: 4.231  loss_cls_b3: 3.811  loss_cls_b31: 4.081  loss_cls_b32: 4.19  loss_cls_b33: 4.331  loss_triplet_b1: 0.1994  loss_triplet_b2: 0.1825  loss_triplet_b3: 0.1818  loss_triplet_b22: 0.201  loss_triplet_b33: 0.2192  time: 0.3249  data_time: 0.0012  lr: 3.50e-04  max_mem: 9468M
[04/14 20:31:46 fastreid.utils.events]:  eta: 1:28:03  epoch/iter: 10/2837  total_loss: 33.16  loss_cls_b1: 3.871  loss_cls_b2: 3.784  loss_cls_b21: 4.03  loss_cls_b22: 4.184  loss_cls_b3: 3.811  loss_cls_b31: 4.081  loss_cls_b32: 4.202  loss_cls_b33: 4.303  loss_triplet_b1: 0.2114  loss_triplet_b2: 0.186  loss_triplet_b3: 0.1834  loss_triplet_b22: 0.2027  loss_triplet_b33: 0.2263  time: 0.3262  data_time: 0.0012  lr: 3.50e-04  max_mem: 9468M
[04/14 20:32:54 fastreid.utils.events]:  eta: 1:26:55  epoch/iter: 11/2999  total_loss: 32.93  loss_cls_b1: 3.749  loss_cls_b2: 3.69  loss_cls_b21: 3.94  loss_cls_b22: 4.171  loss_cls_b3: 3.723  loss_cls_b31: 4.004  loss_cls_b32: 4.13  loss_cls_b33: 4.268  loss_triplet_b1: 0.2081  loss_triplet_b2: 0.1814  loss_triplet_b3: 0.1959  loss_triplet_b22: 0.194  loss_triplet_b33: 0.227  time: 0.3312  data_time: 0.0011  lr: 3.50e-04  max_mem: 9468M
[04/14 20:33:34 fastreid.utils.events]:  eta: 1:26:15  epoch/iter: 11/3095  total_loss: 31.89  loss_cls_b1: 3.691  loss_cls_b2: 3.552  loss_cls_b21: 3.901  loss_cls_b22: 4.044  loss_cls_b3: 3.609  loss_cls_b31: 3.943  loss_cls_b32: 4.013  loss_cls_b33: 4.182  loss_triplet_b1: 0.1933  loss_triplet_b2: 0.1648  loss_triplet_b3: 0.1811  loss_triplet_b22: 0.1756  loss_triplet_b33: 0.1976  time: 0.3339  data_time: 0.0009  lr: 3.50e-04  max_mem: 9468M
[04/14 20:34:17 fastreid.utils.events]:  eta: 1:25:32  epoch/iter: 12/3199  total_loss: 31.21  loss_cls_b1: 3.62  loss_cls_b2: 3.532  loss_cls_b21: 3.865  loss_cls_b22: 3.958  loss_cls_b3: 3.521  loss_cls_b31: 3.907  loss_cls_b32: 3.965  loss_cls_b33: 4.139  loss_triplet_b1: 0.1706  loss_triplet_b2: 0.1523  loss_triplet_b3: 0.1453  loss_triplet_b22: 0.1472  loss_triplet_b33: 0.1742  time: 0.3367  data_time: 0.0016  lr: 3.50e-04  max_mem: 9468M
[04/14 20:35:22 fastreid.utils.events]:  eta: 1:24:27  epoch/iter: 12/3353  total_loss: 30.56  loss_cls_b1: 3.522  loss_cls_b2: 3.442  loss_cls_b21: 3.725  loss_cls_b22: 3.909  loss_cls_b3: 3.456  loss_cls_b31: 3.816  loss_cls_b32: 3.937  loss_cls_b33: 4.056  loss_triplet_b1: 0.1577  loss_triplet_b2: 0.1406  loss_triplet_b3: 0.1264  loss_triplet_b22: 0.1463  loss_triplet_b33: 0.1552  time: 0.3404  data_time: 0.0009  lr: 3.50e-04  max_mem: 9468M
[04/14 20:35:41 fastreid.utils.events]:  eta: 1:24:08  epoch/iter: 13/3399  total_loss: 30.52  loss_cls_b1: 3.507  loss_cls_b2: 3.415  loss_cls_b21: 3.707  loss_cls_b22: 3.902  loss_cls_b3: 3.45  loss_cls_b31: 3.782  loss_cls_b32: 3.923  loss_cls_b33: 4.04  loss_triplet_b1: 0.1583  loss_triplet_b2: 0.1418  loss_triplet_b3: 0.1301  loss_triplet_b22: 0.1508  loss_triplet_b33: 0.1617  time: 0.3416  data_time: 0.0013  lr: 3.50e-04  max_mem: 9468M
[04/14 20:37:05 fastreid.utils.events]:  eta: 1:22:37  epoch/iter: 13/3599  total_loss: 30.05  loss_cls_b1: 3.451  loss_cls_b2: 3.371  loss_cls_b21: 3.651  loss_cls_b22: 3.815  loss_cls_b3: 3.374  loss_cls_b31: 3.747  loss_cls_b32: 3.816  loss_cls_b33: 3.948  loss_triplet_b1: 0.1505  loss_triplet_b2: 0.1364  loss_triplet_b3: 0.1371  loss_triplet_b22: 0.142  loss_triplet_b33: 0.1355  time: 0.3457  data_time: 0.0010  lr: 3.50e-04  max_mem: 9468M
[04/14 20:37:10 fastreid.utils.events]:  eta: 1:22:31  epoch/iter: 13/3611  total_loss: 29.72  loss_cls_b1: 3.423  loss_cls_b2: 3.343  loss_cls_b21: 3.648  loss_cls_b22: 3.79  loss_cls_b3: 3.334  loss_cls_b31: 3.701  loss_cls_b32: 3.784  loss_cls_b33: 3.9  loss_triplet_b1: 0.1489  loss_triplet_b2: 0.1315  loss_triplet_b3: 0.1351  loss_triplet_b22: 0.1404  loss_triplet_b33: 0.1277  time: 0.3460  data_time: 0.0016  lr: 3.50e-04  max_mem: 9468M
[04/14 20:38:28 fastreid.utils.events]:  eta: 1:21:10  epoch/iter: 14/3799  total_loss: 29.88  loss_cls_b1: 3.432  loss_cls_b2: 3.382  loss_cls_b21: 3.615  loss_cls_b22: 3.792  loss_cls_b3: 3.307  loss_cls_b31: 3.7  loss_cls_b32: 3.812  loss_cls_b33: 3.938  loss_triplet_b1: 0.1552  loss_triplet_b2: 0.1333  loss_triplet_b3: 0.1359  loss_triplet_b22: 0.1413  loss_triplet_b33: 0.1455  time: 0.3495  data_time: 0.0008  lr: 3.50e-04  max_mem: 9468M
[04/14 20:38:57 fastreid.utils.events]:  eta: 1:20:39  epoch/iter: 14/3869  total_loss: 29.38  loss_cls_b1: 3.357  loss_cls_b2: 3.317  loss_cls_b21: 3.516  loss_cls_b22: 3.707  loss_cls_b3: 3.253  loss_cls_b31: 3.635  loss_cls_b32: 3.742  loss_cls_b33: 3.872  loss_triplet_b1: 0.147  loss_triplet_b2: 0.124  loss_triplet_b3: 0.1262  loss_triplet_b22: 0.1348  loss_triplet_b33: 0.1412  time: 0.3507  data_time: 0.0011  lr: 3.50e-04  max_mem: 9468M
[04/14 20:39:52 fastreid.utils.events]:  eta: 1:19:42  epoch/iter: 15/3999  total_loss: 28.69  loss_cls_b1: 3.291  loss_cls_b2: 3.212  loss_cls_b21: 3.492  loss_cls_b22: 3.671  loss_cls_b3: 3.217  loss_cls_b31: 3.623  loss_cls_b32: 3.695  loss_cls_b33: 3.793  loss_triplet_b1: 0.1292  loss_triplet_b2: 0.1205  loss_triplet_b3: 0.1133  loss_triplet_b22: 0.1265  loss_triplet_b33: 0.1226  time: 0.3530  data_time: 0.0011  lr: 3.50e-04  max_mem: 9468M
[04/14 20:40:46 fastreid.utils.events]:  eta: 1:18:50  epoch/iter: 15/4127  total_loss: 28.53  loss_cls_b1: 3.253  loss_cls_b2: 3.124  loss_cls_b21: 3.442  loss_cls_b22: 3.624  loss_cls_b3: 3.154  loss_cls_b31: 3.543  loss_cls_b32: 3.645  loss_cls_b33: 3.788  loss_triplet_b1: 0.1249  loss_triplet_b2: 0.1147  loss_triplet_b3: 0.1152  loss_triplet_b22: 0.1162  loss_triplet_b33: 0.1291  time: 0.3550  data_time: 0.0011  lr: 3.50e-04  max_mem: 9468M
[04/14 20:41:16 fastreid.utils.events]:  eta: 1:18:18  epoch/iter: 16/4199  total_loss: 27.84  loss_cls_b1: 3.196  loss_cls_b2: 3.099  loss_cls_b21: 3.429  loss_cls_b22: 3.583  loss_cls_b3: 3.094  loss_cls_b31: 3.501  loss_cls_b32: 3.586  loss_cls_b33: 3.759  loss_triplet_b1: 0.1203  loss_triplet_b2: 0.1025  loss_triplet_b3: 0.1056  loss_triplet_b22: 0.1059  loss_triplet_b33: 0.1184  time: 0.3561  data_time: 0.0010  lr: 3.50e-04  max_mem: 9468M
[04/14 20:42:33 fastreid.utils.events]:  eta: 1:16:56  epoch/iter: 16/4385  total_loss: 27.46  loss_cls_b1: 3.173  loss_cls_b2: 3.07  loss_cls_b21: 3.356  loss_cls_b22: 3.511  loss_cls_b3: 3.042  loss_cls_b31: 3.458  loss_cls_b32: 3.607  loss_cls_b33: 3.703  loss_triplet_b1: 0.1241  loss_triplet_b2: 0.1087  loss_triplet_b3: 0.1032  loss_triplet_b22: 0.1103  loss_triplet_b33: 0.1112  time: 0.3586  data_time: 0.0010  lr: 3.50e-04  max_mem: 9468M
[04/14 20:42:39 fastreid.utils.events]:  eta: 1:16:49  epoch/iter: 17/4399  total_loss: 27.58  loss_cls_b1: 3.184  loss_cls_b2: 3.066  loss_cls_b21: 3.353  loss_cls_b22: 3.507  loss_cls_b3: 3.066  loss_cls_b31: 3.456  loss_cls_b32: 3.606  loss_cls_b33: 3.698  loss_triplet_b1: 0.1275  loss_triplet_b2: 0.1101  loss_triplet_b3: 0.1037  loss_triplet_b22: 0.1073  loss_triplet_b33: 0.1076  time: 0.3588  data_time: 0.0011  lr: 3.50e-04  max_mem: 9468M
[04/14 20:44:02 fastreid.utils.events]:  eta: 1:15:26  epoch/iter: 17/4599  total_loss: 26.8  loss_cls_b1: 3.098  loss_cls_b2: 3.009  loss_cls_b21: 3.252  loss_cls_b22: 3.481  loss_cls_b3: 2.982  loss_cls_b31: 3.328  loss_cls_b32: 3.452  loss_cls_b33: 3.599  loss_triplet_b1: 0.1071  loss_triplet_b2: 0.09554  loss_triplet_b3: 0.08605  loss_triplet_b22: 0.0944  loss_triplet_b33: 0.08607  time: 0.3613  data_time: 0.0009  lr: 3.50e-04  max_mem: 9468M
[04/14 20:44:21 fastreid.utils.events]:  eta: 1:15:11  epoch/iter: 17/4643  total_loss: 26.91  loss_cls_b1: 3.117  loss_cls_b2: 3.011  loss_cls_b21: 3.273  loss_cls_b22: 3.509  loss_cls_b3: 3.008  loss_cls_b31: 3.364  loss_cls_b32: 3.46  loss_cls_b33: 3.657  loss_triplet_b1: 0.1041  loss_triplet_b2: 0.09499  loss_triplet_b3: 0.08535  loss_triplet_b22: 0.08797  loss_triplet_b33: 0.09537  time: 0.3620  data_time: 0.0005  lr: 3.50e-04  max_mem: 9468M
[04/14 20:45:26 fastreid.utils.events]:  eta: 1:14:06  epoch/iter: 18/4799  total_loss: 26.31  loss_cls_b1: 3.004  loss_cls_b2: 2.909  loss_cls_b21: 3.235  loss_cls_b22: 3.387  loss_cls_b3: 2.941  loss_cls_b31: 3.322  loss_cls_b32: 3.411  loss_cls_b33: 3.536  loss_triplet_b1: 0.1031  loss_triplet_b2: 0.08776  loss_triplet_b3: 0.08137  loss_triplet_b22: 0.09243  loss_triplet_b33: 0.09934  time: 0.3638  data_time: 0.0010  lr: 3.50e-04  max_mem: 9468M
[04/14 20:46:09 fastreid.utils.events]:  eta: 1:13:24  epoch/iter: 18/4901  total_loss: 25.8  loss_cls_b1: 2.941  loss_cls_b2: 2.875  loss_cls_b21: 3.142  loss_cls_b22: 3.342  loss_cls_b3: 2.849  loss_cls_b31: 3.231  loss_cls_b32: 3.353  loss_cls_b33: 3.45  loss_triplet_b1: 0.09073  loss_triplet_b2: 0.07698  loss_triplet_b3: 0.07547  loss_triplet_b22: 0.08289  loss_triplet_b33: 0.08541  time: 0.3650  data_time: 0.0011  lr: 3.50e-04  max_mem: 9468M
[04/14 20:46:51 fastreid.utils.events]:  eta: 1:12:44  epoch/iter: 19/4999  total_loss: 25.91  loss_cls_b1: 2.989  loss_cls_b2: 2.886  loss_cls_b21: 3.164  loss_cls_b22: 3.345  loss_cls_b3: 2.843  loss_cls_b31: 3.256  loss_cls_b32: 3.358  loss_cls_b33: 3.503  loss_triplet_b1: 0.1041  loss_triplet_b2: 0.08473  loss_triplet_b3: 0.07878  loss_triplet_b22: 0.08878  loss_triplet_b33: 0.0912  time: 0.3661  data_time: 0.0011  lr: 3.50e-04  max_mem: 9468M
[04/14 20:47:58 fastreid.engine.defaults]: Prepare testing set
[04/14 20:47:58 fastreid.data.datasets.bases]: => Loaded DukeMTMC in csv format: 
| subset   | # ids   | # images   | # cameras   |
|:---------|:--------|:-----------|:------------|
| query    | 702     | 2228       | 8           |
| gallery  | 1110    | 17661      | 8           |
[04/14 20:47:58 fastreid.evaluation.evaluator]: Start inference on 19889 images
[04/14 20:48:06 fastreid.evaluation.evaluator]: Inference done 11/156. 0.0269 s / batch. ETA=0:00:21
[04/14 20:48:33 fastreid.evaluation.evaluator]: Total inference time: 0:00:27.478370 (0.181976 s / batch per device)
[04/14 20:48:33 fastreid.evaluation.evaluator]: Total inference pure compute time: 0:00:05 (0.036823 s / batch per device)
[04/14 20:48:43 fastreid.evaluation.testing]: Evaluation results in csv format: 
| Datasets   | Rank-1   | Rank-5   | Rank-10   | mAP   | mINP   | metric   |
|:-----------|:---------|:---------|:----------|:------|:-------|:---------|
| DukeMTMC   | 78.73    | 87.75    | 90.93     | 64.27 | 22.81  | 71.50    |
[04/14 20:48:43 fastreid.utils.checkpoint]: Saving checkpoint to logs/dukemtmc/mgn_R50-ibn/model_best.pth
[04/14 20:48:45 fastreid.utils.checkpoint]: Saving checkpoint to logs/dukemtmc/mgn_R50-ibn/model_0019.pth
[04/14 20:48:46 fastreid.utils.events]:  eta: 1:11:41  epoch/iter: 19/5159  total_loss: 26.02  loss_cls_b1: 2.972  loss_cls_b2: 2.847  loss_cls_b21: 3.122  loss_cls_b22: 3.362  loss_cls_b3: 2.875  loss_cls_b31: 3.254  loss_cls_b32: 3.3  loss_cls_b33: 3.508  loss_triplet_b1: 0.1128  loss_triplet_b2: 0.08383  loss_triplet_b3: 0.08013  loss_triplet_b22: 0.0878  loss_triplet_b33: 0.08811  time: 0.3678  data_time: 0.0012  lr: 3.50e-04  max_mem: 9468M
[04/14 20:49:03 fastreid.utils.events]:  eta: 1:11:26  epoch/iter: 20/5199  total_loss: 25.79  loss_cls_b1: 2.941  loss_cls_b2: 2.826  loss_cls_b21: 3.122  loss_cls_b22: 3.348  loss_cls_b3: 2.844  loss_cls_b31: 3.233  loss_cls_b32: 3.284  loss_cls_b33: 3.449  loss_triplet_b1: 0.104  loss_triplet_b2: 0.08073  loss_triplet_b3: 0.07855  loss_triplet_b22: 0.08297  loss_triplet_b33: 0.07996  time: 0.3682  data_time: 0.0011  lr: 3.50e-04  max_mem: 9468M
[04/14 20:50:27 fastreid.utils.events]:  eta: 1:10:06  epoch/iter: 20/5399  total_loss: 24.84  loss_cls_b1: 2.871  loss_cls_b2: 2.747  loss_cls_b21: 3.025  loss_cls_b22: 3.27  loss_cls_b3: 2.721  loss_cls_b31: 3.128  loss_cls_b32: 3.164  loss_cls_b33: 3.413  loss_triplet_b1: 0.09735  loss_triplet_b2: 0.07287  loss_triplet_b3: 0.07615  loss_triplet_b22: 0.07392  loss_triplet_b33: 0.07272  time: 0.3700  data_time: 0.0011  lr: 3.50e-04  max_mem: 9468M
[04/14 20:50:34 fastreid.utils.events]:  eta: 1:09:58  epoch/iter: 20/5417  total_loss: 24.96  loss_cls_b1: 2.891  loss_cls_b2: 2.749  loss_cls_b21: 3.021  loss_cls_b22: 3.271  loss_cls_b3: 2.726  loss_cls_b31: 3.142  loss_cls_b32: 3.184  loss_cls_b33: 3.413  loss_triplet_b1: 0.1001  loss_triplet_b2: 0.07571  loss_triplet_b3: 0.0768  loss_triplet_b22: 0.07796  loss_triplet_b33: 0.07638  time: 0.3701  data_time: 0.0013  lr: 3.50e-04  max_mem: 9468M
[04/14 20:51:50 fastreid.utils.events]:  eta: 1:08:43  epoch/iter: 21/5599  total_loss: 24.66  loss_cls_b1: 2.884  loss_cls_b2: 2.707  loss_cls_b21: 3.009  loss_cls_b22: 3.248  loss_cls_b3: 2.7  loss_cls_b31: 3.111  loss_cls_b32: 3.23  loss_cls_b33: 3.404  loss_triplet_b1: 0.09451  loss_triplet_b2: 0.07376  loss_triplet_b3: 0.0702  loss_triplet_b22: 0.07234  loss_triplet_b33: 0.07602  time: 0.3717  data_time: 0.0012  lr: 3.50e-04  max_mem: 9468M
[04/14 20:52:22 fastreid.utils.events]:  eta: 1:08:07  epoch/iter: 21/5675  total_loss: 24.81  loss_cls_b1: 2.889  loss_cls_b2: 2.729  loss_cls_b21: 3.024  loss_cls_b22: 3.256  loss_cls_b3: 2.74  loss_cls_b31: 3.111  loss_cls_b32: 3.23  loss_cls_b33: 3.408  loss_triplet_b1: 0.1024  loss_triplet_b2: 0.07386  loss_triplet_b3: 0.07477  loss_triplet_b22: 0.07387  loss_triplet_b33: 0.08307  time: 0.3723  data_time: 0.0009  lr: 3.50e-04  max_mem: 9468M
[04/14 20:53:13 fastreid.utils.events]:  eta: 1:07:14  epoch/iter: 22/5799  total_loss: 24.33  loss_cls_b1: 2.812  loss_cls_b2: 2.698  loss_cls_b21: 2.984  loss_cls_b22: 3.177  loss_cls_b3: 2.675  loss_cls_b31: 3.094  loss_cls_b32: 3.196  loss_cls_b33: 3.326  loss_triplet_b1: 0.09066  loss_triplet_b2: 0.06949  loss_triplet_b3: 0.06813  loss_triplet_b22: 0.06284  loss_triplet_b33: 0.07858  time: 0.3732  data_time: 0.0010  lr: 3.50e-04  max_mem: 9468M
[04/14 20:54:09 fastreid.utils.events]:  eta: 1:06:16  epoch/iter: 22/5933  total_loss: 23.95  loss_cls_b1: 2.777  loss_cls_b2: 2.646  loss_cls_b21: 2.933  loss_cls_b22: 3.141  loss_cls_b3: 2.659  loss_cls_b31: 3.065  loss_cls_b32: 3.16  loss_cls_b33: 3.262  loss_triplet_b1: 0.09667  loss_triplet_b2: 0.06993  loss_triplet_b3: 0.06246  loss_triplet_b22: 0.06605  loss_triplet_b33: 0.07  time: 0.3742  data_time: 0.0010  lr: 3.50e-04  max_mem: 9468M
[04/14 20:54:37 fastreid.utils.events]:  eta: 1:05:48  epoch/iter: 23/5999  total_loss: 24.1  loss_cls_b1: 2.778  loss_cls_b2: 2.644  loss_cls_b21: 2.931  loss_cls_b22: 3.155  loss_cls_b3: 2.642  loss_cls_b31: 3.063  loss_cls_b32: 3.114  loss_cls_b33: 3.263  loss_triplet_b1: 0.09548  loss_triplet_b2: 0.06994  loss_triplet_b3: 0.06459  loss_triplet_b22: 0.07271  loss_triplet_b33: 0.07395  time: 0.3747  data_time: 0.0013  lr: 3.50e-04  max_mem: 9468M
[04/14 20:55:57 fastreid.utils.events]:  eta: 1:04:26  epoch/iter: 23/6191  total_loss: 23.95  loss_cls_b1: 2.768  loss_cls_b2: 2.637  loss_cls_b21: 2.926  loss_cls_b22: 3.106  loss_cls_b3: 2.626  loss_cls_b31: 3.003  loss_cls_b32: 3.083  loss_cls_b33: 3.262  loss_triplet_b1: 0.0987  loss_triplet_b2: 0.06632  loss_triplet_b3: 0.07206  loss_triplet_b22: 0.07609  loss_triplet_b33: 0.08385  time: 0.3760  data_time: 0.0011  lr: 3.50e-04  max_mem: 9468M
[04/14 20:56:01 fastreid.utils.events]:  eta: 1:04:22  epoch/iter: 24/6199  total_loss: 23.9  loss_cls_b1: 2.757  loss_cls_b2: 2.634  loss_cls_b21: 2.926  loss_cls_b22: 3.084  loss_cls_b3: 2.617  loss_cls_b31: 2.991  loss_cls_b32: 3.083  loss_cls_b33: 3.25  loss_triplet_b1: 0.0976  loss_triplet_b2: 0.06458  loss_triplet_b3: 0.07029  loss_triplet_b22: 0.07103  loss_triplet_b33: 0.08285  time: 0.3761  data_time: 0.0010  lr: 3.50e-04  max_mem: 9468M
[04/14 20:57:25 fastreid.utils.events]:  eta: 1:03:02  epoch/iter: 24/6399  total_loss: 23.45  loss_cls_b1: 2.725  loss_cls_b2: 2.592  loss_cls_b21: 2.876  loss_cls_b22: 3.132  loss_cls_b3: 2.596  loss_cls_b31: 2.956  loss_cls_b32: 3.054  loss_cls_b33: 3.232  loss_triplet_b1: 0.08341  loss_triplet_b2: 0.06076  loss_triplet_b3: 0.05461  loss_triplet_b22: 0.05561  loss_triplet_b33: 0.0642  time: 0.3775  data_time: 0.0015  lr: 3.50e-04  max_mem: 9468M
[04/14 20:57:46 fastreid.utils.events]:  eta: 1:02:42  epoch/iter: 24/6449  total_loss: 23.3  loss_cls_b1: 2.715  loss_cls_b2: 2.563  loss_cls_b21: 2.771  loss_cls_b22: 3.137  loss_cls_b3: 2.542  loss_cls_b31: 2.919  loss_cls_b32: 3.038  loss_cls_b33: 3.256  loss_triplet_b1: 0.08506  loss_triplet_b2: 0.06013  loss_triplet_b3: 0.0506  loss_triplet_b22: 0.05195  loss_triplet_b33: 0.05618  time: 0.3778  data_time: 0.0011  lr: 3.50e-04  max_mem: 9468M
[04/14 20:58:49 fastreid.utils.events]:  eta: 1:01:38  epoch/iter: 25/6599  total_loss: 23.07  loss_cls_b1: 2.686  loss_cls_b2: 2.527  loss_cls_b21: 2.8  loss_cls_b22: 3.013  loss_cls_b3: 2.516  loss_cls_b31: 2.938  loss_cls_b32: 3.006  loss_cls_b33: 3.179  loss_triplet_b1: 0.0875  loss_triplet_b2: 0.05743  loss_triplet_b3: 0.05389  loss_triplet_b22: 0.05969  loss_triplet_b33: 0.06237  time: 0.3787  data_time: 0.0011  lr: 3.50e-04  max_mem: 9468M
[04/14 20:59:34 fastreid.utils.events]:  eta: 1:00:57  epoch/iter: 25/6707  total_loss: 22.74  loss_cls_b1: 2.646  loss_cls_b2: 2.48  loss_cls_b21: 2.769  loss_cls_b22: 2.974  loss_cls_b3: 2.467  loss_cls_b31: 2.932  loss_cls_b32: 2.952  loss_cls_b33: 3.104  loss_triplet_b1: 0.07573  loss_triplet_b2: 0.05705  loss_triplet_b3: 0.0492  loss_triplet_b22: 0.05784  loss_triplet_b33: 0.05864  time: 0.3794  data_time: 0.0009  lr: 3.50e-04  max_mem: 9468M
[04/14 21:00:13 fastreid.utils.events]:  eta: 1:00:22  epoch/iter: 26/6799  total_loss: 22.66  loss_cls_b1: 2.635  loss_cls_b2: 2.46  loss_cls_b21: 2.73  loss_cls_b22: 2.984  loss_cls_b3: 2.455  loss_cls_b31: 2.856  loss_cls_b32: 2.94  loss_cls_b33: 3.13  loss_triplet_b1: 0.06672  loss_triplet_b2: 0.0559  loss_triplet_b3: 0.04402  loss_triplet_b22: 0.04966  loss_triplet_b33: 0.04582  time: 0.3799  data_time: 0.0012  lr: 3.50e-04  max_mem: 9468M
[04/14 21:01:22 fastreid.utils.events]:  eta: 0:59:15  epoch/iter: 26/6965  total_loss: 22.39  loss_cls_b1: 2.605  loss_cls_b2: 2.459  loss_cls_b21: 2.724  loss_cls_b22: 2.957  loss_cls_b3: 2.438  loss_cls_b31: 2.818  loss_cls_b32: 2.959  loss_cls_b33: 3.088  loss_triplet_b1: 0.05887  loss_triplet_b2: 0.04885  loss_triplet_b3: 0.04383  loss_triplet_b22: 0.04077  loss_triplet_b33: 0.04435  time: 0.3809  data_time: 0.0007  lr: 3.50e-04  max_mem: 9468M
[04/14 21:01:36 fastreid.utils.events]:  eta: 0:59:00  epoch/iter: 27/6999  total_loss: 22.25  loss_cls_b1: 2.586  loss_cls_b2: 2.459  loss_cls_b21: 2.719  loss_cls_b22: 2.956  loss_cls_b3: 2.421  loss_cls_b31: 2.801  loss_cls_b32: 2.935  loss_cls_b33: 3.058  loss_triplet_b1: 0.06007  loss_triplet_b2: 0.04853  loss_triplet_b3: 0.04311  loss_triplet_b22: 0.04132  loss_triplet_b33: 0.04599  time: 0.3811  data_time: 0.0009  lr: 3.50e-04  max_mem: 9468M
[04/14 21:03:00 fastreid.utils.events]:  eta: 0:57:40  epoch/iter: 27/7199  total_loss: 22.27  loss_cls_b1: 2.557  loss_cls_b2: 2.405  loss_cls_b21: 2.728  loss_cls_b22: 2.96  loss_cls_b3: 2.416  loss_cls_b31: 2.816  loss_cls_b32: 2.902  loss_cls_b33: 3.057  loss_triplet_b1: 0.07624  loss_triplet_b2: 0.055  loss_triplet_b3: 0.04766  loss_triplet_b22: 0.05022  loss_triplet_b33: 0.04793  time: 0.3821  data_time: 0.0007  lr: 3.50e-04  max_mem: 9468M
[04/14 21:03:11 fastreid.utils.events]:  eta: 0:57:31  epoch/iter: 27/7223  total_loss: 22.03  loss_cls_b1: 2.541  loss_cls_b2: 2.393  loss_cls_b21: 2.728  loss_cls_b22: 2.921  loss_cls_b3: 2.393  loss_cls_b31: 2.789  loss_cls_b32: 2.896  loss_cls_b33: 3.048  loss_triplet_b1: 0.07514  loss_triplet_b2: 0.05474  loss_triplet_b3: 0.04796  loss_triplet_b22: 0.04985  loss_triplet_b33: 0.04937  time: 0.3823  data_time: 0.0011  lr: 3.50e-04  max_mem: 9468M
[04/14 21:04:24 fastreid.utils.events]:  eta: 0:56:19  epoch/iter: 28/7399  total_loss: 22.08  loss_cls_b1: 2.575  loss_cls_b2: 2.427  loss_cls_b21: 2.73  loss_cls_b22: 2.928  loss_cls_b3: 2.389  loss_cls_b31: 2.784  loss_cls_b32: 2.901  loss_cls_b33: 3.036  loss_triplet_b1: 0.07292  loss_triplet_b2: 0.05178  loss_triplet_b3: 0.04522  loss_triplet_b22: 0.04841  loss_triplet_b33: 0.04854  time: 0.3831  data_time: 0.0007  lr: 3.50e-04  max_mem: 9468M
[04/14 21:04:59 fastreid.utils.events]:  eta: 0:55:45  epoch/iter: 28/7481  total_loss: 21.82  loss_cls_b1: 2.542  loss_cls_b2: 2.411  loss_cls_b21: 2.656  loss_cls_b22: 2.886  loss_cls_b3: 2.384  loss_cls_b31: 2.712  loss_cls_b32: 2.843  loss_cls_b33: 2.993  loss_triplet_b1: 0.06163  loss_triplet_b2: 0.04864  loss_triplet_b3: 0.03906  loss_triplet_b22: 0.04114  loss_triplet_b33: 0.04225  time: 0.3835  data_time: 0.0010  lr: 3.50e-04  max_mem: 9468M
[04/14 21:05:48 fastreid.utils.events]:  eta: 0:54:57  epoch/iter: 29/7599  total_loss: 21.6  loss_cls_b1: 2.508  loss_cls_b2: 2.367  loss_cls_b21: 2.632  loss_cls_b22: 2.84  loss_cls_b3: 2.364  loss_cls_b31: 2.712  loss_cls_b32: 2.779  loss_cls_b33: 2.925  loss_triplet_b1: 0.06389  loss_triplet_b2: 0.04705  loss_triplet_b3: 0.03986  loss_triplet_b22: 0.04599  loss_triplet_b33: 0.04289  time: 0.3841  data_time: 0.0008  lr: 3.50e-04  max_mem: 9468M
[04/14 21:06:47 fastreid.engine.defaults]: Prepare testing set
[04/14 21:06:47 fastreid.data.datasets.bases]: => Loaded DukeMTMC in csv format: 
| subset   | # ids   | # images   | # cameras   |
|:---------|:--------|:-----------|:------------|
| query    | 702     | 2228       | 8           |
| gallery  | 1110    | 17661      | 8           |
[04/14 21:06:47 fastreid.evaluation.evaluator]: Start inference on 19889 images
[04/14 21:06:55 fastreid.evaluation.evaluator]: Inference done 11/156. 0.0388 s / batch. ETA=0:00:22
[04/14 21:07:22 fastreid.evaluation.evaluator]: Total inference time: 0:00:27.408503 (0.181513 s / batch per device)
[04/14 21:07:22 fastreid.evaluation.evaluator]: Total inference pure compute time: 0:00:05 (0.037013 s / batch per device)
[04/14 21:07:32 fastreid.evaluation.testing]: Evaluation results in csv format: 
| Datasets   | Rank-1   | Rank-5   | Rank-10   | mAP   | mINP   | metric   |
|:-----------|:---------|:---------|:----------|:------|:-------|:---------|
| DukeMTMC   | 82.63    | 91.20    | 93.18     | 69.87 | 28.49  | 76.25    |
[04/14 21:07:32 fastreid.utils.events]:  eta: 0:53:56  epoch/iter: 29/7739  total_loss: 20.97  loss_cls_b1: 2.431  loss_cls_b2: 2.296  loss_cls_b21: 2.547  loss_cls_b22: 2.76  loss_cls_b3: 2.267  loss_cls_b31: 2.629  loss_cls_b32: 2.698  loss_cls_b33: 2.873  loss_triplet_b1: 0.05823  loss_triplet_b2: 0.03917  loss_triplet_b3: 0.03845  loss_triplet_b22: 0.04228  loss_triplet_b33: 0.03961  time: 0.3847  data_time: 0.0010  lr: 3.50e-04  max_mem: 9468M
[04/14 21:07:57 fastreid.utils.events]:  eta: 0:53:30  epoch/iter: 30/7799  total_loss: 21.02  loss_cls_b1: 2.461  loss_cls_b2: 2.293  loss_cls_b21: 2.533  loss_cls_b22: 2.816  loss_cls_b3: 2.271  loss_cls_b31: 2.63  loss_cls_b32: 2.74  loss_cls_b33: 2.915  loss_triplet_b1: 0.06151  loss_triplet_b2: 0.03829  loss_triplet_b3: 0.0371  loss_triplet_b22: 0.04057  loss_triplet_b33: 0.0381  time: 0.3850  data_time: 0.0009  lr: 3.49e-04  max_mem: 9468M
[04/14 21:09:20 fastreid.utils.events]:  eta: 0:52:07  epoch/iter: 30/7997  total_loss: 20.98  loss_cls_b1: 2.446  loss_cls_b2: 2.298  loss_cls_b21: 2.53  loss_cls_b22: 2.826  loss_cls_b3: 2.278  loss_cls_b31: 2.662  loss_cls_b32: 2.742  loss_cls_b33: 2.959  loss_triplet_b1: 0.06346  loss_triplet_b2: 0.04568  loss_triplet_b3: 0.03802  loss_triplet_b22: 0.04396  loss_triplet_b33: 0.04095  time: 0.3858  data_time: 0.0008  lr: 3.49e-04  max_mem: 9468M
[04/14 21:09:21 fastreid.utils.events]:  eta: 0:52:07  epoch/iter: 31/7999  total_loss: 20.98  loss_cls_b1: 2.446  loss_cls_b2: 2.301  loss_cls_b21: 2.53  loss_cls_b22: 2.823  loss_cls_b3: 2.278  loss_cls_b31: 2.662  loss_cls_b32: 2.742  loss_cls_b33: 2.959  loss_triplet_b1: 0.06269  loss_triplet_b2: 0.04422  loss_triplet_b3: 0.03769  loss_triplet_b22: 0.04247  loss_triplet_b33: 0.04027  time: 0.3858  data_time: 0.0008  lr: 3.46e-04  max_mem: 9468M
[04/14 21:10:44 fastreid.utils.events]:  eta: 0:50:41  epoch/iter: 31/8199  total_loss: 20.66  loss_cls_b1: 2.411  loss_cls_b2: 2.285  loss_cls_b21: 2.503  loss_cls_b22: 2.794  loss_cls_b3: 2.241  loss_cls_b31: 2.577  loss_cls_b32: 2.716  loss_cls_b33: 2.909  loss_triplet_b1: 0.05371  loss_triplet_b2: 0.0393  loss_triplet_b3: 0.03275  loss_triplet_b22: 0.04002  loss_triplet_b33: 0.04173  time: 0.3866  data_time: 0.0009  lr: 3.46e-04  max_mem: 9468M
[04/14 21:11:08 fastreid.utils.events]:  eta: 0:50:17  epoch/iter: 31/8255  total_loss: 20.62  loss_cls_b1: 2.412  loss_cls_b2: 2.27  loss_cls_b21: 2.517  loss_cls_b22: 2.796  loss_cls_b3: 2.237  loss_cls_b31: 2.596  loss_cls_b32: 2.718  loss_cls_b33: 2.903  loss_triplet_b1: 0.05581  loss_triplet_b2: 0.04065  loss_triplet_b3: 0.03379  loss_triplet_b22: 0.04248  loss_triplet_b33: 0.04108  time: 0.3868  data_time: 0.0006  lr: 3.46e-04  max_mem: 9468M
[04/14 21:12:08 fastreid.utils.events]:  eta: 0:49:14  epoch/iter: 32/8399  total_loss: 20.19  loss_cls_b1: 2.363  loss_cls_b2: 2.185  loss_cls_b21: 2.416  loss_cls_b22: 2.765  loss_cls_b3: 2.192  loss_cls_b31: 2.56  loss_cls_b32: 2.639  loss_cls_b33: 2.832  loss_triplet_b1: 0.05264  loss_triplet_b2: 0.03539  loss_triplet_b3: 0.03572  loss_triplet_b22: 0.03479  loss_triplet_b33: 0.03263  time: 0.3873  data_time: 0.0009  lr: 3.41e-04  max_mem: 9468M
[04/14 21:12:56 fastreid.utils.events]:  eta: 0:48:27  epoch/iter: 32/8513  total_loss: 19.65  loss_cls_b1: 2.304  loss_cls_b2: 2.15  loss_cls_b21: 2.399  loss_cls_b22: 2.662  loss_cls_b3: 2.129  loss_cls_b31: 2.473  loss_cls_b32: 2.571  loss_cls_b33: 2.769  loss_triplet_b1: 0.04591  loss_triplet_b2: 0.03007  loss_triplet_b3: 0.02752  loss_triplet_b22: 0.03233  loss_triplet_b33: 0.02868  time: 0.3878  data_time: 0.0010  lr: 3.41e-04  max_mem: 9468M
[04/14 21:13:33 fastreid.utils.events]:  eta: 0:47:52  epoch/iter: 33/8599  total_loss: 19.82  loss_cls_b1: 2.344  loss_cls_b2: 2.188  loss_cls_b21: 2.467  loss_cls_b22: 2.655  loss_cls_b3: 2.161  loss_cls_b31: 2.525  loss_cls_b32: 2.613  loss_cls_b33: 2.763  loss_triplet_b1: 0.05095  loss_triplet_b2: 0.03087  loss_triplet_b3: 0.02844  loss_triplet_b22: 0.03221  loss_triplet_b33: 0.03335  time: 0.3882  data_time: 0.0010  lr: 3.35e-04  max_mem: 9468M
[04/14 21:14:45 fastreid.utils.events]:  eta: 0:46:46  epoch/iter: 33/8771  total_loss: 20.05  loss_cls_b1: 2.337  loss_cls_b2: 2.189  loss_cls_b21: 2.459  loss_cls_b22: 2.679  loss_cls_b3: 2.183  loss_cls_b31: 2.536  loss_cls_b32: 2.618  loss_cls_b33: 2.795  loss_triplet_b1: 0.04907  loss_triplet_b2: 0.03381  loss_triplet_b3: 0.03029  loss_triplet_b22: 0.03139  loss_triplet_b33: 0.02727  time: 0.3888  data_time: 0.0010  lr: 3.35e-04  max_mem: 9468M
[04/14 21:14:57 fastreid.utils.events]:  eta: 0:46:35  epoch/iter: 34/8799  total_loss: 19.65  loss_cls_b1: 2.317  loss_cls_b2: 2.169  loss_cls_b21: 2.416  loss_cls_b22: 2.657  loss_cls_b3: 2.166  loss_cls_b31: 2.512  loss_cls_b32: 2.603  loss_cls_b33: 2.792  loss_triplet_b1: 0.04739  loss_triplet_b2: 0.03257  loss_triplet_b3: 0.03029  loss_triplet_b22: 0.03139  loss_triplet_b33: 0.02727  time: 0.3889  data_time: 0.0009  lr: 3.27e-04  max_mem: 9468M
[04/14 21:16:20 fastreid.utils.events]:  eta: 0:45:09  epoch/iter: 34/8999  total_loss: 19.62  loss_cls_b1: 2.276  loss_cls_b2: 2.143  loss_cls_b21: 2.383  loss_cls_b22: 2.631  loss_cls_b3: 2.119  loss_cls_b31: 2.463  loss_cls_b32: 2.565  loss_cls_b33: 2.742  loss_triplet_b1: 0.04506  loss_triplet_b2: 0.03127  loss_triplet_b3: 0.02671  loss_triplet_b22: 0.0295  loss_triplet_b33: 0.02983  time: 0.3895  data_time: 0.0009  lr: 3.27e-04  max_mem: 9468M
[04/14 21:16:33 fastreid.utils.events]:  eta: 0:44:58  epoch/iter: 34/9029  total_loss: 19.59  loss_cls_b1: 2.273  loss_cls_b2: 2.136  loss_cls_b21: 2.383  loss_cls_b22: 2.601  loss_cls_b3: 2.108  loss_cls_b31: 2.453  loss_cls_b32: 2.551  loss_cls_b33: 2.723  loss_triplet_b1: 0.04602  loss_triplet_b2: 0.0318  loss_triplet_b3: 0.02899  loss_triplet_b22: 0.03195  loss_triplet_b33: 0.02914  time: 0.3896  data_time: 0.0008  lr: 3.27e-04  max_mem: 9468M
[04/14 21:17:44 fastreid.utils.events]:  eta: 0:43:48  epoch/iter: 35/9199  total_loss: 19.41  loss_cls_b1: 2.276  loss_cls_b2: 2.098  loss_cls_b21: 2.345  loss_cls_b22: 2.621  loss_cls_b3: 2.08  loss_cls_b31: 2.419  loss_cls_b32: 2.591  loss_cls_b33: 2.681  loss_triplet_b1: 0.04942  loss_triplet_b2: 0.03117  loss_triplet_b3: 0.02209  loss_triplet_b22: 0.03197  loss_triplet_b33: 0.02447  time: 0.3902  data_time: 0.0009  lr: 3.17e-04  max_mem: 9468M
[04/14 21:18:21 fastreid.utils.events]:  eta: 0:43:11  epoch/iter: 35/9287  total_loss: 19.44  loss_cls_b1: 2.285  loss_cls_b2: 2.132  loss_cls_b21: 2.321  loss_cls_b22: 2.641  loss_cls_b3: 2.1  loss_cls_b31: 2.419  loss_cls_b32: 2.565  loss_cls_b33: 2.737  loss_triplet_b1: 0.04624  loss_triplet_b2: 0.02785  loss_triplet_b3: 0.02191  loss_triplet_b22: 0.02982  loss_triplet_b33: 0.02432  time: 0.3904  data_time: 0.0008  lr: 3.17e-04  max_mem: 9468M
[04/14 21:19:08 fastreid.utils.events]:  eta: 0:42:24  epoch/iter: 36/9399  total_loss: 18.99  loss_cls_b1: 2.214  loss_cls_b2: 2.068  loss_cls_b21: 2.319  loss_cls_b22: 2.559  loss_cls_b3: 2.061  loss_cls_b31: 2.421  loss_cls_b32: 2.488  loss_cls_b33: 2.654  loss_triplet_b1: 0.04165  loss_triplet_b2: 0.02554  loss_triplet_b3: 0.02268  loss_triplet_b22: 0.02481  loss_triplet_b33: 0.02484  time: 0.3908  data_time: 0.0009  lr: 3.05e-04  max_mem: 9468M
[04/14 21:20:09 fastreid.utils.events]:  eta: 0:41:20  epoch/iter: 36/9545  total_loss: 18.72  loss_cls_b1: 2.173  loss_cls_b2: 2.022  loss_cls_b21: 2.234  loss_cls_b22: 2.535  loss_cls_b3: 2.026  loss_cls_b31: 2.329  loss_cls_b32: 2.458  loss_cls_b33: 2.638  loss_triplet_b1: 0.04144  loss_triplet_b2: 0.02518  loss_triplet_b3: 0.02268  loss_triplet_b22: 0.0261  loss_triplet_b33: 0.02413  time: 0.3912  data_time: 0.0008  lr: 3.05e-04  max_mem: 9468M
[04/14 21:20:31 fastreid.utils.events]:  eta: 0:40:56  epoch/iter: 37/9599  total_loss: 18.85  loss_cls_b1: 2.193  loss_cls_b2: 2.028  loss_cls_b21: 2.248  loss_cls_b22: 2.527  loss_cls_b3: 2.015  loss_cls_b31: 2.329  loss_cls_b32: 2.481  loss_cls_b33: 2.65  loss_triplet_b1: 0.04305  loss_triplet_b2: 0.02692  loss_triplet_b3: 0.02301  loss_triplet_b22: 0.02689  loss_triplet_b33: 0.02137  time: 0.3913  data_time: 0.0009  lr: 2.92e-04  max_mem: 9468M
[04/14 21:21:55 fastreid.utils.events]:  eta: 0:39:28  epoch/iter: 37/9799  total_loss: 18.87  loss_cls_b1: 2.157  loss_cls_b2: 2.022  loss_cls_b21: 2.268  loss_cls_b22: 2.556  loss_cls_b3: 1.996  loss_cls_b31: 2.33  loss_cls_b32: 2.468  loss_cls_b33: 2.622  loss_triplet_b1: 0.04236  loss_triplet_b2: 0.02517  loss_triplet_b3: 0.02247  loss_triplet_b22: 0.02646  loss_triplet_b33: 0.02418  time: 0.3919  data_time: 0.0009  lr: 2.92e-04  max_mem: 9468M
[04/14 21:21:56 fastreid.utils.events]:  eta: 0:39:26  epoch/iter: 37/9803  total_loss: 18.87  loss_cls_b1: 2.167  loss_cls_b2: 2.035  loss_cls_b21: 2.273  loss_cls_b22: 2.552  loss_cls_b3: 2.005  loss_cls_b31: 2.336  loss_cls_b32: 2.477  loss_cls_b33: 2.622  loss_triplet_b1: 0.04333  loss_triplet_b2: 0.02547  loss_triplet_b3: 0.02258  loss_triplet_b22: 0.02646  loss_triplet_b33: 0.02467  time: 0.3919  data_time: 0.0009  lr: 2.92e-04  max_mem: 9468M
[04/14 21:23:18 fastreid.utils.events]:  eta: 0:38:05  epoch/iter: 38/9999  total_loss: 18.13  loss_cls_b1: 2.128  loss_cls_b2: 1.985  loss_cls_b21: 2.224  loss_cls_b22: 2.433  loss_cls_b3: 1.949  loss_cls_b31: 2.261  loss_cls_b32: 2.359  loss_cls_b33: 2.54  loss_triplet_b1: 0.03938  loss_triplet_b2: 0.0226  loss_triplet_b3: 0.01731  loss_triplet_b22: 0.02011  loss_triplet_b33: 0.01469  time: 0.3923  data_time: 0.0011  lr: 2.78e-04  max_mem: 9468M
[04/14 21:23:44 fastreid.utils.events]:  eta: 0:37:37  epoch/iter: 38/10061  total_loss: 18.28  loss_cls_b1: 2.151  loss_cls_b2: 1.995  loss_cls_b21: 2.255  loss_cls_b22: 2.436  loss_cls_b3: 1.985  loss_cls_b31: 2.31  loss_cls_b32: 2.382  loss_cls_b33: 2.537  loss_triplet_b1: 0.04133  loss_triplet_b2: 0.0226  loss_triplet_b3: 0.01955  loss_triplet_b22: 0.02144  loss_triplet_b33: 0.02088  time: 0.3925  data_time: 0.0009  lr: 2.78e-04  max_mem: 9468M
[04/14 21:24:42 fastreid.utils.events]:  eta: 0:36:40  epoch/iter: 39/10199  total_loss: 17.77  loss_cls_b1: 2.101  loss_cls_b2: 1.965  loss_cls_b21: 2.154  loss_cls_b22: 2.446  loss_cls_b3: 1.933  loss_cls_b31: 2.242  loss_cls_b32: 2.303  loss_cls_b33: 2.529  loss_triplet_b1: 0.03399  loss_triplet_b2: 0.02185  loss_triplet_b3: 0.0183  loss_triplet_b22: 0.02007  loss_triplet_b33: 0.01712  time: 0.3928  data_time: 0.0010  lr: 2.63e-04  max_mem: 9468M
[04/14 21:25:32 fastreid.engine.defaults]: Prepare testing set
[04/14 21:25:32 fastreid.data.datasets.bases]: => Loaded DukeMTMC in csv format: 
| subset   | # ids   | # images   | # cameras   |
|:---------|:--------|:-----------|:------------|
| query    | 702     | 2228       | 8           |
| gallery  | 1110    | 17661      | 8           |
[04/14 21:25:32 fastreid.evaluation.evaluator]: Start inference on 19889 images
[04/14 21:25:41 fastreid.evaluation.evaluator]: Inference done 11/156. 0.0390 s / batch. ETA=0:00:22
[04/14 21:26:07 fastreid.evaluation.evaluator]: Total inference time: 0:00:27.490745 (0.182058 s / batch per device)
[04/14 21:26:07 fastreid.evaluation.evaluator]: Total inference pure compute time: 0:00:05 (0.037673 s / batch per device)
[04/14 21:26:18 fastreid.evaluation.testing]: Evaluation results in csv format: 
| Datasets   | Rank-1   | Rank-5   | Rank-10   | mAP   | mINP   | metric   |
|:-----------|:---------|:---------|:----------|:------|:-------|:---------|
| DukeMTMC   | 84.78    | 91.29    | 93.67     | 72.34 | 31.57  | 78.56    |
[04/14 21:26:18 fastreid.utils.checkpoint]: Saving checkpoint to logs/dukemtmc/mgn_R50-ibn/model_best.pth
[04/14 21:26:20 fastreid.utils.checkpoint]: Saving checkpoint to logs/dukemtmc/mgn_R50-ibn/model_0039.pth
[04/14 21:26:21 fastreid.utils.events]:  eta: 0:35:50  epoch/iter: 39/10319  total_loss: 17.64  loss_cls_b1: 2.087  loss_cls_b2: 1.935  loss_cls_b21: 2.145  loss_cls_b22: 2.38  loss_cls_b3: 1.911  loss_cls_b31: 2.209  loss_cls_b32: 2.305  loss_cls_b33: 2.481  loss_triplet_b1: 0.03408  loss_triplet_b2: 0.01899  loss_triplet_b3: 0.01716  loss_triplet_b22: 0.02013  loss_triplet_b33: 0.01707  time: 0.3932  data_time: 0.0008  lr: 2.63e-04  max_mem: 9468M
[04/14 21:26:55 fastreid.utils.events]:  eta: 0:35:16  epoch/iter: 40/10399  total_loss: 17.62  loss_cls_b1: 2.081  loss_cls_b2: 1.923  loss_cls_b21: 2.161  loss_cls_b22: 2.383  loss_cls_b3: 1.921  loss_cls_b31: 2.216  loss_cls_b32: 2.281  loss_cls_b33: 2.493  loss_triplet_b1: 0.03688  loss_triplet_b2: 0.02026  loss_triplet_b3: 0.0173  loss_triplet_b22: 0.02408  loss_triplet_b33: 0.01999  time: 0.3933  data_time: 0.0012  lr: 2.46e-04  max_mem: 9468M
[04/14 21:28:09 fastreid.utils.events]:  eta: 0:34:01  epoch/iter: 40/10577  total_loss: 17.06  loss_cls_b1: 2.019  loss_cls_b2: 1.864  loss_cls_b21: 2.099  loss_cls_b22: 2.342  loss_cls_b3: 1.835  loss_cls_b31: 2.097  loss_cls_b32: 2.241  loss_cls_b33: 2.426  loss_triplet_b1: 0.02874  loss_triplet_b2: 0.01998  loss_triplet_b3: 0.01503  loss_triplet_b22: 0.019  loss_triplet_b33: 0.0157  time: 0.3937  data_time: 0.0008  lr: 2.46e-04  max_mem: 9468M
[04/14 21:28:18 fastreid.utils.events]:  eta: 0:33:52  epoch/iter: 41/10599  total_loss: 17.23  loss_cls_b1: 2.052  loss_cls_b2: 1.884  loss_cls_b21: 2.106  loss_cls_b22: 2.353  loss_cls_b3: 1.855  loss_cls_b31: 2.144  loss_cls_b32: 2.263  loss_cls_b33: 2.426  loss_triplet_b1: 0.03168  loss_triplet_b2: 0.0213  loss_triplet_b3: 0.01636  loss_triplet_b22: 0.01979  loss_triplet_b33: 0.01656  time: 0.3938  data_time: 0.0007  lr: 2.29e-04  max_mem: 9468M
[04/14 21:29:42 fastreid.utils.events]:  eta: 0:32:29  epoch/iter: 41/10799  total_loss: 17.57  loss_cls_b1: 2.076  loss_cls_b2: 1.915  loss_cls_b21: 2.143  loss_cls_b22: 2.384  loss_cls_b3: 1.89  loss_cls_b31: 2.202  loss_cls_b32: 2.29  loss_cls_b33: 2.465  loss_triplet_b1: 0.03463  loss_triplet_b2: 0.02144  loss_triplet_b3: 0.01862  loss_triplet_b22: 0.02137  loss_triplet_b33: 0.01848  time: 0.3943  data_time: 0.0006  lr: 2.29e-04  max_mem: 9468M
[04/14 21:29:57 fastreid.utils.events]:  eta: 0:32:14  epoch/iter: 41/10835  total_loss: 17.32  loss_cls_b1: 2.036  loss_cls_b2: 1.894  loss_cls_b21: 2.125  loss_cls_b22: 2.363  loss_cls_b3: 1.87  loss_cls_b31: 2.185  loss_cls_b32: 2.262  loss_cls_b33: 2.432  loss_triplet_b1: 0.03269  loss_triplet_b2: 0.0211  loss_triplet_b3: 0.01756  loss_triplet_b22: 0.01919  loss_triplet_b33: 0.01815  time: 0.3944  data_time: 0.0010  lr: 2.29e-04  max_mem: 9468M
[04/14 21:31:05 fastreid.utils.events]:  eta: 0:31:05  epoch/iter: 42/10999  total_loss: 16.75  loss_cls_b1: 1.973  loss_cls_b2: 1.818  loss_cls_b21: 2.061  loss_cls_b22: 2.32  loss_cls_b3: 1.815  loss_cls_b31: 2.144  loss_cls_b32: 2.248  loss_cls_b33: 2.367  loss_triplet_b1: 0.02757  loss_triplet_b2: 0.01578  loss_triplet_b3: 0.0132  loss_triplet_b22: 0.01594  loss_triplet_b33: 0.01498  time: 0.3947  data_time: 0.0008  lr: 2.12e-04  max_mem: 9468M
[04/14 21:31:45 fastreid.utils.events]:  eta: 0:30:28  epoch/iter: 42/11093  total_loss: 16.89  loss_cls_b1: 1.968  loss_cls_b2: 1.835  loss_cls_b21: 2.071  loss_cls_b22: 2.273  loss_cls_b3: 1.811  loss_cls_b31: 2.109  loss_cls_b32: 2.269  loss_cls_b33: 2.364  loss_triplet_b1: 0.02669  loss_triplet_b2: 0.01659  loss_triplet_b3: 0.01364  loss_triplet_b22: 0.01563  loss_triplet_b33: 0.01525  time: 0.3949  data_time: 0.0009  lr: 2.12e-04  max_mem: 9468M
[04/14 21:32:29 fastreid.utils.events]:  eta: 0:29:45  epoch/iter: 43/11199  total_loss: 16.57  loss_cls_b1: 1.952  loss_cls_b2: 1.799  loss_cls_b21: 2.013  loss_cls_b22: 2.227  loss_cls_b3: 1.744  loss_cls_b31: 2.079  loss_cls_b32: 2.213  loss_cls_b33: 2.341  loss_triplet_b1: 0.02612  loss_triplet_b2: 0.01455  loss_triplet_b3: 0.01315  loss_triplet_b22: 0.01404  loss_triplet_b33: 0.01511  time: 0.3951  data_time: 0.0007  lr: 1.94e-04  max_mem: 9468M
[04/14 21:33:33 fastreid.utils.events]:  eta: 0:28:44  epoch/iter: 43/11351  total_loss: 16.4  loss_cls_b1: 1.964  loss_cls_b2: 1.808  loss_cls_b21: 2.024  loss_cls_b22: 2.237  loss_cls_b3: 1.777  loss_cls_b31: 2.069  loss_cls_b32: 2.184  loss_cls_b33: 2.342  loss_triplet_b1: 0.02815  loss_triplet_b2: 0.01599  loss_triplet_b3: 0.01428  loss_triplet_b22: 0.01483  loss_triplet_b33: 0.01297  time: 0.3955  data_time: 0.0008  lr: 1.94e-04  max_mem: 9468M
[04/14 21:33:53 fastreid.utils.events]:  eta: 0:28:24  epoch/iter: 44/11399  total_loss: 16.49  loss_cls_b1: 1.964  loss_cls_b2: 1.815  loss_cls_b21: 2.016  loss_cls_b22: 2.255  loss_cls_b3: 1.779  loss_cls_b31: 2.066  loss_cls_b32: 2.194  loss_cls_b33: 2.38  loss_triplet_b1: 0.02743  loss_triplet_b2: 0.01563  loss_triplet_b3: 0.01408  loss_triplet_b22: 0.01448  loss_triplet_b33: 0.01243  time: 0.3956  data_time: 0.0008  lr: 1.75e-04  max_mem: 9468M
[04/14 21:35:17 fastreid.utils.events]:  eta: 0:27:01  epoch/iter: 44/11599  total_loss: 15.7  loss_cls_b1: 1.87  loss_cls_b2: 1.691  loss_cls_b21: 1.942  loss_cls_b22: 2.137  loss_cls_b3: 1.68  loss_cls_b31: 1.984  loss_cls_b32: 2.11  loss_cls_b33: 2.25  loss_triplet_b1: 0.0224  loss_triplet_b2: 0.01385  loss_triplet_b3: 0.0103  loss_triplet_b22: 0.01054  loss_triplet_b33: 0.01171  time: 0.3960  data_time: 0.0008  lr: 1.75e-04  max_mem: 9468M
[04/14 21:35:21 fastreid.utils.events]:  eta: 0:26:57  epoch/iter: 44/11609  total_loss: 15.81  loss_cls_b1: 1.883  loss_cls_b2: 1.707  loss_cls_b21: 1.962  loss_cls_b22: 2.157  loss_cls_b3: 1.689  loss_cls_b31: 2  loss_cls_b32: 2.128  loss_cls_b33: 2.261  loss_triplet_b1: 0.02265  loss_triplet_b2: 0.0139  loss_triplet_b3: 0.01045  loss_triplet_b22: 0.01084  loss_triplet_b33: 0.01195  time: 0.3960  data_time: 0.0008  lr: 1.75e-04  max_mem: 9468M
[04/14 21:36:41 fastreid.utils.events]:  eta: 0:25:37  epoch/iter: 45/11799  total_loss: 15.67  loss_cls_b1: 1.864  loss_cls_b2: 1.679  loss_cls_b21: 1.941  loss_cls_b22: 2.115  loss_cls_b3: 1.679  loss_cls_b31: 1.978  loss_cls_b32: 2.079  loss_cls_b33: 2.168  loss_triplet_b1: 0.02358  loss_triplet_b2: 0.01089  loss_triplet_b3: 0.01024  loss_triplet_b22: 0.01283  loss_triplet_b33: 0.009743  time: 0.3963  data_time: 0.0009  lr: 1.57e-04  max_mem: 9468M
[04/14 21:37:09 fastreid.utils.events]:  eta: 0:25:09  epoch/iter: 45/11867  total_loss: 15.68  loss_cls_b1: 1.879  loss_cls_b2: 1.679  loss_cls_b21: 1.922  loss_cls_b22: 2.114  loss_cls_b3: 1.685  loss_cls_b31: 1.972  loss_cls_b32: 2.082  loss_cls_b33: 2.213  loss_triplet_b1: 0.02272  loss_triplet_b2: 0.01126  loss_triplet_b3: 0.01122  loss_triplet_b22: 0.01182  loss_triplet_b33: 0.01094  time: 0.3965  data_time: 0.0007  lr: 1.57e-04  max_mem: 9468M
[04/14 21:38:05 fastreid.utils.events]:  eta: 0:24:15  epoch/iter: 46/11999  total_loss: 15.38  loss_cls_b1: 1.846  loss_cls_b2: 1.644  loss_cls_b21: 1.857  loss_cls_b22: 2.083  loss_cls_b3: 1.661  loss_cls_b31: 1.925  loss_cls_b32: 2.016  loss_cls_b33: 2.182  loss_triplet_b1: 0.02104  loss_triplet_b2: 0.009238  loss_triplet_b3: 0.01055  loss_triplet_b22: 0.01017  loss_triplet_b33: 0.01307  time: 0.3967  data_time: 0.0008  lr: 1.39e-04  max_mem: 9468M
[04/14 21:38:57 fastreid.utils.events]:  eta: 0:23:21  epoch/iter: 46/12125  total_loss: 15.13  loss_cls_b1: 1.798  loss_cls_b2: 1.644  loss_cls_b21: 1.819  loss_cls_b22: 2.068  loss_cls_b3: 1.607  loss_cls_b31: 1.851  loss_cls_b32: 1.986  loss_cls_b33: 2.141  loss_triplet_b1: 0.01905  loss_triplet_b2: 0.009598  loss_triplet_b3: 0.00967  loss_triplet_b22: 0.009977  loss_triplet_b33: 0.01357  time: 0.3969  data_time: 0.0010  lr: 1.39e-04  max_mem: 9468M
[04/14 21:39:28 fastreid.utils.events]:  eta: 0:22:50  epoch/iter: 47/12199  total_loss: 15.1  loss_cls_b1: 1.773  loss_cls_b2: 1.637  loss_cls_b21: 1.818  loss_cls_b22: 2.088  loss_cls_b3: 1.59  loss_cls_b31: 1.829  loss_cls_b32: 2.01  loss_cls_b33: 2.171  loss_triplet_b1: 0.01995  loss_triplet_b2: 0.01102  loss_triplet_b3: 0.009266  loss_triplet_b22: 0.0102  loss_triplet_b33: 0.01315  time: 0.3971  data_time: 0.0009  lr: 1.21e-04  max_mem: 9468M
[04/14 21:40:45 fastreid.utils.events]:  eta: 0:21:31  epoch/iter: 47/12383  total_loss: 14.95  loss_cls_b1: 1.753  loss_cls_b2: 1.633  loss_cls_b21: 1.846  loss_cls_b22: 2.04  loss_cls_b3: 1.586  loss_cls_b31: 1.903  loss_cls_b32: 2.015  loss_cls_b33: 2.171  loss_triplet_b1: 0.01805  loss_triplet_b2: 0.009827  loss_triplet_b3: 0.007844  loss_triplet_b22: 0.009313  loss_triplet_b33: 0.009164  time: 0.3974  data_time: 0.0009  lr: 1.21e-04  max_mem: 9468M
[04/14 21:40:52 fastreid.utils.events]:  eta: 0:21:24  epoch/iter: 48/12399  total_loss: 14.92  loss_cls_b1: 1.749  loss_cls_b2: 1.617  loss_cls_b21: 1.84  loss_cls_b22: 2.025  loss_cls_b3: 1.586  loss_cls_b31: 1.895  loss_cls_b32: 2.009  loss_cls_b33: 2.174  loss_triplet_b1: 0.01759  loss_triplet_b2: 0.009977  loss_triplet_b3: 0.007844  loss_triplet_b22: 0.00948  loss_triplet_b33: 0.009585  time: 0.3974  data_time: 0.0009  lr: 1.04e-04  max_mem: 9468M
[04/14 21:42:15 fastreid.utils.events]:  eta: 0:20:00  epoch/iter: 48/12599  total_loss: 14.58  loss_cls_b1: 1.695  loss_cls_b2: 1.574  loss_cls_b21: 1.757  loss_cls_b22: 2.021  loss_cls_b3: 1.563  loss_cls_b31: 1.811  loss_cls_b32: 1.94  loss_cls_b33: 2.084  loss_triplet_b1: 0.01686  loss_triplet_b2: 0.009392  loss_triplet_b3: 0.007388  loss_triplet_b22: 0.009354  loss_triplet_b33: 0.008882  time: 0.3977  data_time: 0.0009  lr: 1.04e-04  max_mem: 9468M
[04/14 21:42:33 fastreid.utils.events]:  eta: 0:19:43  epoch/iter: 48/12641  total_loss: 14.56  loss_cls_b1: 1.693  loss_cls_b2: 1.569  loss_cls_b21: 1.765  loss_cls_b22: 2.016  loss_cls_b3: 1.562  loss_cls_b31: 1.815  loss_cls_b32: 1.935  loss_cls_b33: 2.089  loss_triplet_b1: 0.01594  loss_triplet_b2: 0.008055  loss_triplet_b3: 0.007242  loss_triplet_b22: 0.009174  loss_triplet_b33: 0.008702  time: 0.3978  data_time: 0.0009  lr: 1.04e-04  max_mem: 9468M
[04/14 21:43:38 fastreid.utils.events]:  eta: 0:18:36  epoch/iter: 49/12799  total_loss: 14.47  loss_cls_b1: 1.698  loss_cls_b2: 1.526  loss_cls_b21: 1.752  loss_cls_b22: 1.974  loss_cls_b3: 1.518  loss_cls_b31: 1.785  loss_cls_b32: 1.875  loss_cls_b33: 2.078  loss_triplet_b1: 0.01602  loss_triplet_b2: 0.008109  loss_triplet_b3: 0.007575  loss_triplet_b22: 0.008246  loss_triplet_b33: 0.009686  time: 0.3980  data_time: 0.0009  lr: 8.80e-05  max_mem: 9468M
[04/14 21:44:20 fastreid.engine.defaults]: Prepare testing set
[04/14 21:44:20 fastreid.data.datasets.bases]: => Loaded DukeMTMC in csv format: 
| subset   | # ids   | # images   | # cameras   |
|:---------|:--------|:-----------|:------------|
| query    | 702     | 2228       | 8           |
| gallery  | 1110    | 17661      | 8           |
[04/14 21:44:20 fastreid.evaluation.evaluator]: Start inference on 19889 images
[04/14 21:44:29 fastreid.evaluation.evaluator]: Inference done 11/156. 0.0311 s / batch. ETA=0:00:21
[04/14 21:44:55 fastreid.evaluation.evaluator]: Total inference time: 0:00:27.457812 (0.181840 s / batch per device)
[04/14 21:44:55 fastreid.evaluation.evaluator]: Total inference pure compute time: 0:00:05 (0.037007 s / batch per device)
[04/14 21:45:06 fastreid.evaluation.testing]: Evaluation results in csv format: 
| Datasets   | Rank-1   | Rank-5   | Rank-10   | mAP   | mINP   | metric   |
|:-----------|:---------|:---------|:----------|:------|:-------|:---------|
| DukeMTMC   | 85.95    | 92.10    | 93.94     | 74.06 | 32.39  | 80.01    |
[04/14 21:45:06 fastreid.utils.events]:  eta: 0:17:54  epoch/iter: 49/12899  total_loss: 14.09  loss_cls_b1: 1.699  loss_cls_b2: 1.515  loss_cls_b21: 1.743  loss_cls_b22: 1.964  loss_cls_b3: 1.516  loss_cls_b31: 1.758  loss_cls_b32: 1.834  loss_cls_b33: 2.042  loss_triplet_b1: 0.01731  loss_triplet_b2: 0.009107  loss_triplet_b3: 0.008431  loss_triplet_b22: 0.008529  loss_triplet_b33: 0.0106  time: 0.3981  data_time: 0.0007  lr: 8.80e-05  max_mem: 9468M
[04/14 21:45:47 fastreid.utils.events]:  eta: 0:17:12  epoch/iter: 50/12999  total_loss: 14.04  loss_cls_b1: 1.707  loss_cls_b2: 1.517  loss_cls_b21: 1.732  loss_cls_b22: 1.925  loss_cls_b3: 1.519  loss_cls_b31: 1.802  loss_cls_b32: 1.834  loss_cls_b33: 1.995  loss_triplet_b1: 0.01766  loss_triplet_b2: 0.008602  loss_triplet_b3: 0.008635  loss_triplet_b22: 0.008919  loss_triplet_b33: 0.01215  time: 0.3983  data_time: 0.0011  lr: 7.27e-05  max_mem: 9468M
[04/14 21:46:54 fastreid.utils.events]:  eta: 0:16:07  epoch/iter: 50/13157  total_loss: 13.92  loss_cls_b1: 1.661  loss_cls_b2: 1.49  loss_cls_b21: 1.676  loss_cls_b22: 1.89  loss_cls_b3: 1.493  loss_cls_b31: 1.731  loss_cls_b32: 1.879  loss_cls_b33: 1.985  loss_triplet_b1: 0.01726  loss_triplet_b2: 0.007406  loss_triplet_b3: 0.006732  loss_triplet_b22: 0.007554  loss_triplet_b33: 0.009016  time: 0.3985  data_time: 0.0010  lr: 7.27e-05  max_mem: 9468M
[04/14 21:47:11 fastreid.utils.events]:  eta: 0:15:49  epoch/iter: 51/13199  total_loss: 13.69  loss_cls_b1: 1.607  loss_cls_b2: 1.459  loss_cls_b21: 1.634  loss_cls_b22: 1.876  loss_cls_b3: 1.447  loss_cls_b31: 1.686  loss_cls_b32: 1.839  loss_cls_b33: 1.979  loss_triplet_b1: 0.01484  loss_triplet_b2: 0.005835  loss_triplet_b3: 0.00551  loss_triplet_b22: 0.006189  loss_triplet_b33: 0.006921  time: 0.3986  data_time: 0.0012  lr: 5.85e-05  max_mem: 9468M
[04/14 21:48:35 fastreid.utils.events]:  eta: 0:14:26  epoch/iter: 51/13399  total_loss: 13.73  loss_cls_b1: 1.65  loss_cls_b2: 1.488  loss_cls_b21: 1.682  loss_cls_b22: 1.877  loss_cls_b3: 1.519  loss_cls_b31: 1.713  loss_cls_b32: 1.848  loss_cls_b33: 1.976  loss_triplet_b1: 0.01602  loss_triplet_b2: 0.008628  loss_triplet_b3: 0.008826  loss_triplet_b22: 0.009848  loss_triplet_b33: 0.01133  time: 0.3989  data_time: 0.0010  lr: 5.85e-05  max_mem: 9468M
[04/14 21:48:41 fastreid.utils.events]:  eta: 0:14:20  epoch/iter: 51/13415  total_loss: 13.74  loss_cls_b1: 1.65  loss_cls_b2: 1.488  loss_cls_b21: 1.682  loss_cls_b22: 1.87  loss_cls_b3: 1.51  loss_cls_b31: 1.704  loss_cls_b32: 1.845  loss_cls_b33: 1.971  loss_triplet_b1: 0.01544  loss_triplet_b2: 0.008645  loss_triplet_b3: 0.008125  loss_triplet_b22: 0.009389  loss_triplet_b33: 0.01102  time: 0.3989  data_time: 0.0010  lr: 5.85e-05  max_mem: 9468M
[04/14 21:49:58 fastreid.utils.events]:  eta: 0:13:03  epoch/iter: 52/13599  total_loss: 13.38  loss_cls_b1: 1.56  loss_cls_b2: 1.433  loss_cls_b21: 1.639  loss_cls_b22: 1.839  loss_cls_b3: 1.423  loss_cls_b31: 1.692  loss_cls_b32: 1.779  loss_cls_b33: 1.944  loss_triplet_b1: 0.01292  loss_triplet_b2: 0.006224  loss_triplet_b3: 0.005706  loss_triplet_b22: 0.005098  loss_triplet_b33: 0.007995  time: 0.3991  data_time: 0.0010  lr: 4.56e-05  max_mem: 9468M
[04/14 21:50:29 fastreid.utils.events]:  eta: 0:12:32  epoch/iter: 52/13673  total_loss: 13.15  loss_cls_b1: 1.537  loss_cls_b2: 1.422  loss_cls_b21: 1.616  loss_cls_b22: 1.816  loss_cls_b3: 1.406  loss_cls_b31: 1.661  loss_cls_b32: 1.757  loss_cls_b33: 1.89  loss_triplet_b1: 0.0131  loss_triplet_b2: 0.006909  loss_triplet_b3: 0.007103  loss_triplet_b22: 0.005928  loss_triplet_b33: 0.01001  time: 0.3992  data_time: 0.0009  lr: 4.56e-05  max_mem: 9468M
[04/14 21:51:22 fastreid.utils.events]:  eta: 0:11:41  epoch/iter: 53/13799  total_loss: 13.34  loss_cls_b1: 1.547  loss_cls_b2: 1.413  loss_cls_b21: 1.63  loss_cls_b22: 1.798  loss_cls_b3: 1.408  loss_cls_b31: 1.669  loss_cls_b32: 1.777  loss_cls_b33: 1.91  loss_triplet_b1: 0.01216  loss_triplet_b2: 0.005666  loss_triplet_b3: 0.00633  loss_triplet_b22: 0.005664  loss_triplet_b33: 0.008925  time: 0.3994  data_time: 0.0012  lr: 3.41e-05  max_mem: 9468M
[04/14 21:52:17 fastreid.utils.events]:  eta: 0:10:46  epoch/iter: 53/13931  total_loss: 13.25  loss_cls_b1: 1.554  loss_cls_b2: 1.419  loss_cls_b21: 1.607  loss_cls_b22: 1.825  loss_cls_b3: 1.409  loss_cls_b31: 1.661  loss_cls_b32: 1.772  loss_cls_b33: 1.891  loss_triplet_b1: 0.01162  loss_triplet_b2: 0.006117  loss_triplet_b3: 0.005521  loss_triplet_b22: 0.006949  loss_triplet_b33: 0.008546  time: 0.3996  data_time: 0.0008  lr: 3.41e-05  max_mem: 9468M
[04/14 21:52:46 fastreid.utils.events]:  eta: 0:10:18  epoch/iter: 54/13999  total_loss: 13.3  loss_cls_b1: 1.554  loss_cls_b2: 1.426  loss_cls_b21: 1.612  loss_cls_b22: 1.839  loss_cls_b3: 1.418  loss_cls_b31: 1.653  loss_cls_b32: 1.767  loss_cls_b33: 1.867  loss_triplet_b1: 0.01099  loss_triplet_b2: 0.006352  loss_triplet_b3: 0.005654  loss_triplet_b22: 0.007081  loss_triplet_b33: 0.01017  time: 0.3997  data_time: 0.0009  lr: 2.41e-05  max_mem: 9468M
[04/14 21:54:05 fastreid.utils.events]:  eta: 0:08:58  epoch/iter: 54/14189  total_loss: 13.39  loss_cls_b1: 1.568  loss_cls_b2: 1.426  loss_cls_b21: 1.605  loss_cls_b22: 1.827  loss_cls_b3: 1.431  loss_cls_b31: 1.66  loss_cls_b32: 1.761  loss_cls_b33: 1.933  loss_triplet_b1: 0.01281  loss_triplet_b2: 0.006151  loss_triplet_b3: 0.007888  loss_triplet_b22: 0.006735  loss_triplet_b33: 0.01074  time: 0.3999  data_time: 0.0006  lr: 2.41e-05  max_mem: 9468M
[04/14 21:54:09 fastreid.utils.events]:  eta: 0:08:54  epoch/iter: 55/14199  total_loss: 13.42  loss_cls_b1: 1.583  loss_cls_b2: 1.446  loss_cls_b21: 1.611  loss_cls_b22: 1.827  loss_cls_b3: 1.435  loss_cls_b31: 1.664  loss_cls_b32: 1.766  loss_cls_b33: 1.942  loss_triplet_b1: 0.01333  loss_triplet_b2: 0.006331  loss_triplet_b3: 0.007974  loss_triplet_b22: 0.006963  loss_triplet_b33: 0.01145  time: 0.3999  data_time: 0.0005  lr: 1.58e-05  max_mem: 9468M
[04/14 21:55:32 fastreid.utils.events]:  eta: 0:07:30  epoch/iter: 55/14399  total_loss: 13.36  loss_cls_b1: 1.581  loss_cls_b2: 1.457  loss_cls_b21: 1.634  loss_cls_b22: 1.827  loss_cls_b3: 1.439  loss_cls_b31: 1.684  loss_cls_b32: 1.774  loss_cls_b33: 1.933  loss_triplet_b1: 0.01342  loss_triplet_b2: 0.006111  loss_triplet_b3: 0.006571  loss_triplet_b22: 0.006636  loss_triplet_b33: 0.01109  time: 0.4002  data_time: 0.0007  lr: 1.58e-05  max_mem: 9468M
[04/14 21:55:52 fastreid.utils.events]:  eta: 0:07:10  epoch/iter: 55/14447  total_loss: 13.27  loss_cls_b1: 1.559  loss_cls_b2: 1.442  loss_cls_b21: 1.612  loss_cls_b22: 1.809  loss_cls_b3: 1.423  loss_cls_b31: 1.649  loss_cls_b32: 1.751  loss_cls_b33: 1.911  loss_triplet_b1: 0.01324  loss_triplet_b2: 0.005343  loss_triplet_b3: 0.006731  loss_triplet_b22: 0.006636  loss_triplet_b33: 0.01089  time: 0.4002  data_time: 0.0010  lr: 1.58e-05  max_mem: 9468M
[04/14 21:56:56 fastreid.utils.events]:  eta: 0:06:07  epoch/iter: 56/14599  total_loss: 13.04  loss_cls_b1: 1.565  loss_cls_b2: 1.435  loss_cls_b21: 1.605  loss_cls_b22: 1.788  loss_cls_b3: 1.396  loss_cls_b31: 1.627  loss_cls_b32: 1.71  loss_cls_b33: 1.882  loss_triplet_b1: 0.01298  loss_triplet_b2: 0.006324  loss_triplet_b3: 0.00688  loss_triplet_b22: 0.006391  loss_triplet_b33: 0.01259  time: 0.4004  data_time: 0.0007  lr: 9.25e-06  max_mem: 9468M
[04/14 21:57:40 fastreid.utils.events]:  eta: 0:05:22  epoch/iter: 56/14705  total_loss: 13.04  loss_cls_b1: 1.566  loss_cls_b2: 1.427  loss_cls_b21: 1.6  loss_cls_b22: 1.804  loss_cls_b3: 1.39  loss_cls_b31: 1.647  loss_cls_b32: 1.716  loss_cls_b33: 1.884  loss_triplet_b1: 0.01146  loss_triplet_b2: 0.007622  loss_triplet_b3: 0.00624  loss_triplet_b22: 0.006191  loss_triplet_b33: 0.01162  time: 0.4005  data_time: 0.0012  lr: 9.25e-06  max_mem: 9468M
[04/14 21:58:20 fastreid.utils.events]:  eta: 0:04:43  epoch/iter: 57/14799  total_loss: 13.06  loss_cls_b1: 1.527  loss_cls_b2: 1.41  loss_cls_b21: 1.581  loss_cls_b22: 1.824  loss_cls_b3: 1.402  loss_cls_b31: 1.603  loss_cls_b32: 1.735  loss_cls_b33: 1.9  loss_triplet_b1: 0.01051  loss_triplet_b2: 0.005918  loss_triplet_b3: 0.00523  loss_triplet_b22: 0.005571  loss_triplet_b33: 0.00969  time: 0.4006  data_time: 0.0008  lr: 4.52e-06  max_mem: 9468M
[04/14 21:59:28 fastreid.utils.events]:  eta: 0:03:34  epoch/iter: 57/14963  total_loss: 12.7  loss_cls_b1: 1.484  loss_cls_b2: 1.378  loss_cls_b21: 1.546  loss_cls_b22: 1.758  loss_cls_b3: 1.352  loss_cls_b31: 1.583  loss_cls_b32: 1.694  loss_cls_b33: 1.874  loss_triplet_b1: 0.01052  loss_triplet_b2: 0.004735  loss_triplet_b3: 0.004656  loss_triplet_b22: 0.005247  loss_triplet_b33: 0.00923  time: 0.4008  data_time: 0.0009  lr: 4.52e-06  max_mem: 9468M
[04/14 21:59:43 fastreid.utils.events]:  eta: 0:03:19  epoch/iter: 58/14999  total_loss: 12.83  loss_cls_b1: 1.498  loss_cls_b2: 1.395  loss_cls_b21: 1.581  loss_cls_b22: 1.772  loss_cls_b3: 1.377  loss_cls_b31: 1.62  loss_cls_b32: 1.704  loss_cls_b33: 1.872  loss_triplet_b1: 0.01107  loss_triplet_b2: 0.005375  loss_triplet_b3: 0.00528  loss_triplet_b22: 0.00604  loss_triplet_b33: 0.009725  time: 0.4008  data_time: 0.0008  lr: 1.66e-06  max_mem: 9468M
[04/14 22:01:06 fastreid.utils.events]:  eta: 0:01:56  epoch/iter: 58/15199  total_loss: 12.91  loss_cls_b1: 1.54  loss_cls_b2: 1.39  loss_cls_b21: 1.554  loss_cls_b22: 1.773  loss_cls_b3: 1.39  loss_cls_b31: 1.606  loss_cls_b32: 1.689  loss_cls_b33: 1.824  loss_triplet_b1: 0.01233  loss_triplet_b2: 0.005327  loss_triplet_b3: 0.005998  loss_triplet_b22: 0.006561  loss_triplet_b33: 0.009868  time: 0.4011  data_time: 0.0008  lr: 1.66e-06  max_mem: 9468M
[04/14 22:01:15 fastreid.utils.events]:  eta: 0:01:47  epoch/iter: 58/15221  total_loss: 12.92  loss_cls_b1: 1.551  loss_cls_b2: 1.394  loss_cls_b21: 1.561  loss_cls_b22: 1.776  loss_cls_b3: 1.397  loss_cls_b31: 1.613  loss_cls_b32: 1.691  loss_cls_b33: 1.826  loss_triplet_b1: 0.01233  loss_triplet_b2: 0.005327  loss_triplet_b3: 0.006025  loss_triplet_b22: 0.006399  loss_triplet_b33: 0.009868  time: 0.4011  data_time: 0.0007  lr: 1.66e-06  max_mem: 9468M
[04/14 22:02:29 fastreid.utils.events]:  eta: 0:00:33  epoch/iter: 59/15399  total_loss: 13.03  loss_cls_b1: 1.515  loss_cls_b2: 1.393  loss_cls_b21: 1.597  loss_cls_b22: 1.786  loss_cls_b3: 1.395  loss_cls_b31: 1.623  loss_cls_b32: 1.735  loss_cls_b33: 1.888  loss_triplet_b1: 0.0102  loss_triplet_b2: 0.006709  loss_triplet_b3: 0.006604  loss_triplet_b22: 0.006938  loss_triplet_b33: 0.01086  time: 0.4013  data_time: 0.0008  lr: 7.00e-07  max_mem: 9468M
[04/14 22:03:03 fastreid.utils.events]:  eta: 0:00:00  epoch/iter: 59/15479  total_loss: 13.19  loss_cls_b1: 1.529  loss_cls_b2: 1.424  loss_cls_b21: 1.595  loss_cls_b22: 1.815  loss_cls_b3: 1.408  loss_cls_b31: 1.626  loss_cls_b32: 1.731  loss_cls_b33: 1.941  loss_triplet_b1: 0.01118  loss_triplet_b2: 0.006408  loss_triplet_b3: 0.006287  loss_triplet_b22: 0.006711  loss_triplet_b33: 0.0105  time: 0.4013  data_time: 0.0004  lr: 7.00e-07  max_mem: 9468M
[04/14 22:03:03 fastreid.engine.defaults]: Prepare testing set
[04/14 22:03:03 fastreid.data.datasets.bases]: => Loaded DukeMTMC in csv format: 
| subset   | # ids   | # images   | # cameras   |
|:---------|:--------|:-----------|:------------|
| query    | 702     | 2228       | 8           |
| gallery  | 1110    | 17661      | 8           |
[04/14 22:03:03 fastreid.evaluation.evaluator]: Start inference on 19889 images
[04/14 22:03:11 fastreid.evaluation.evaluator]: Inference done 11/156. 0.0290 s / batch. ETA=0:00:21
[04/14 22:03:38 fastreid.evaluation.evaluator]: Total inference time: 0:00:27.463302 (0.181876 s / batch per device)
[04/14 22:03:38 fastreid.evaluation.evaluator]: Total inference pure compute time: 0:00:05 (0.035436 s / batch per device)
[04/14 22:03:48 fastreid.evaluation.testing]: Evaluation results in csv format: 
| Datasets   | Rank-1   | Rank-5   | Rank-10   | mAP   | mINP   | metric   |
|:-----------|:---------|:---------|:----------|:------|:-------|:---------|
| DukeMTMC   | 85.77    | 92.19    | 94.21     | 73.88 | 33.12  | 79.83    |
[04/14 22:03:48 fastreid.utils.checkpoint]: Saving checkpoint to logs/dukemtmc/mgn_R50-ibn/model_best.pth
[04/14 22:03:50 fastreid.utils.checkpoint]: Saving checkpoint to logs/dukemtmc/mgn_R50-ibn/model_final.pth
[04/14 22:03:52 fastreid.utils.events]:  eta: 0:00:00  epoch/iter: 59/15479  total_loss: 13.19  loss_cls_b1: 1.529  loss_cls_b2: 1.424  loss_cls_b21: 1.595  loss_cls_b22: 1.815  loss_cls_b3: 1.408  loss_cls_b31: 1.626  loss_cls_b32: 1.731  loss_cls_b33: 1.941  loss_triplet_b1: 0.01118  loss_triplet_b2: 0.006408  loss_triplet_b3: 0.006287  loss_triplet_b22: 0.006711  loss_triplet_b33: 0.0105  time: 0.4013  data_time: 0.0004  lr: 7.00e-07  max_mem: 9468M
[04/14 22:03:52 fastreid.engine.hooks]: Overall training speed: 15478 iterations in 1:43:32 (0.4013 s / it)
[04/14 22:03:52 fastreid.engine.hooks]: Total training time: 1:48:22 (0:04:50 on hooks)

Process finished with exit code 0
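(One reading note on the evaluation tables above: the `metric` column appears to be the simple mean of Rank-1 and mAP — for example, (84.78 + 72.34) / 2 = 78.56 for the epoch-39 checkpoint — so the final summary value 79.83 reflects both Rank-1 85.77 and mAP 73.88.)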
L1aoXingyu commented 3 years ago

Update the code and try again.
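A minimal sketch of what updating the code typically involves, assuming training was run from a git clone of the fast-reid repository on its default branch (both assumptions, not stated in the thread):

```bash
# Hypothetical update step: pull the latest fast-reid code into the existing clone.
cd fast-reid   # path to your local clone (assumption)
git pull       # fetch and merge the latest commits from the default branch
```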

mhiyer commented 3 years ago

I have the same issue. I want to confirm: if I download the updated repository again and train with the default config, will I get results similar to the model zoo? Authors, please help. Thanks.

L1aoXingyu commented 3 years ago

Use the latest code and train the model with the default config using 1 GPU.
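For reference, a minimal sketch of the suggested single-GPU run. The entry point `tools/train_net.py`, the `--config-file`/`--num-gpus` flags, and the config path follow the standard fastreid layout; adjust them if your checkout differs:

```bash
# Sketch of the suggested run: default DukeMTMC MGN config on a single GPU.
python tools/train_net.py \
    --config-file configs/DukeMTMC/mgn_R50-ibn.yml \
    --num-gpus 1
```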

github-actions[bot] commented 3 years ago

This issue is stale because it has been open for 30 days with no activity.

github-actions[bot] commented 3 years ago

This issue was closed because it has been inactive for 14 days since being marked as stale.