MCG-NJU / MixFormer

[CVPR 2022 Oral & TPAMI 2024] MixFormer: End-to-End Tracking with Iterative Mixed Attention
https://arxiv.org/abs/2203.11082
MIT License

Reproduced results do not match the paper? #75

Open · CarlHuangNuc opened 1 year ago

CarlHuangNuc commented 1 year ago

Hi, we tested the MixViT-ConvMAE online model on LaSOT, but the result differs from the AUC of 73.3% reported in the paper:

lasot                | AUC        | OP50       | OP75       | Precision    | Norm Precision    |
MixFormerOnline      | 71.80      | 83.02      | 71.69      | 78.36        | 81.04             |
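
For reference, a minimal sketch of the usual one-pass-evaluation metrics behind these columns, assuming the standard LaSOT protocol (this is not necessarily the exact evaluation script used here; function names are illustrative):

import numpy as np

def success_auc(ious):
    # AUC of the success plot: mean success rate over IoU thresholds 0..1.
    thresholds = np.linspace(0.0, 1.0, 21)
    return float(np.mean([(ious > t).mean() for t in thresholds]))

def overlap_precision(ious, threshold):
    # Fraction of frames whose IoU exceeds the threshold (OP50 -> 0.5, OP75 -> 0.75).
    return float((ious > threshold).mean())

def center_precision(center_errors, threshold=20.0):
    # Fraction of frames whose center error is below the threshold;
    # for Norm Precision the error is first normalized by the target size.
    return float((center_errors < threshold).mean())

# Toy usage with per-frame overlaps of one sequence
ious = np.array([0.81, 0.62, 0.34, 0.90])
print(success_auc(ious), overlap_precision(ious, 0.5), overlap_precision(ious, 0.75))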

songtianhui commented 1 year ago

Did you use the MixViT-ConvMAE large model with the corresponding checkpoint and config? Different hardware platforms, software versions, and hyper-parameters may cause some differences, but they should not lead to such a big gap. I tested MixViT-ConvMAE large on a 1080 Ti with the default command in test_mixformer_convmae.sh:

python tracking/test.py \
    mixformer_convmae_online baseline_large \
    --dataset lasot \
    --threads 6 --num_gpus 2 \
    --params__model mixformer_convmae_large_online.pth.tar \
    --params__search_area_scale 4.5
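
The --params__ prefixed flags override the corresponding tracker parameters from the command line. A rough sketch of how such prefix-based overrides are typically collected (not the actual parsing code in tracking/test.py; names are illustrative):

import sys

def parse_param_overrides(argv):
    # Collect "--params__<name> <value>" pairs into a dict of overrides.
    overrides = {}
    i = 0
    while i < len(argv):
        if argv[i].startswith("--params__"):
            overrides[argv[i][len("--params__"):]] = argv[i + 1]
            i += 2
        else:
            i += 1
    return overrides

# For the command above this would yield roughly:
# {'model': 'mixformer_convmae_large_online.pth.tar', 'search_area_scale': '4.5'}
print(parse_param_overrides(sys.argv[1:]))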

The default hyper-parameters in the config file experiments/mixformer_convmae_online/baseline_large.yaml are:

Search scale is:  4.5
Online size is:  2
Update interval is:  200
Max score decay is  1.0
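
These settings control the online template update. A rough sketch of how an update schedule with an update interval and a max-score decay typically works (not the repo's exact tracker code; class and attribute names are illustrative):

class OnlineTemplateUpdater:
    def __init__(self, update_interval=200, max_score_decay=1.0):
        self.update_interval = update_interval
        self.max_score_decay = max_score_decay
        self.best_score = 0.0
        self.best_template = None

    def step(self, frame_idx, template, pred_score):
        # Decay the stored best score so older candidates can eventually be replaced
        # (a decay of 1.0, as printed above, effectively disables the decay).
        self.best_score *= self.max_score_decay
        # Keep the highest-scoring candidate seen since the last update.
        if pred_score > self.best_score:
            self.best_score = pred_score
            self.best_template = template
        # Every update_interval frames, swap the stored candidate in as the online template.
        if frame_idx % self.update_interval == 0:
            return self.best_template
        return None  # keep the current online template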

The results are

lasot                | AUC        | OP50       | OP75       | Precision    | Norm Precision    |
MixFormerOnline      | 73.02      | 84.39      | 72.79      | 79.81        | 82.39             |

You can check whether anything in your setup differs from this.