Open CarlHuangNuc opened 1 year ago
Did you use the MixViT-ConvMAE large model with the corresponding checkpoint and config?
Different hardware platforms, software versions and hyper-parameters may cause some differences, but they should not lead to such a big gap.
I tested MixViT-ConvMAE large on a 1080 Ti device with the following default command from test_mixformer_convmae.sh:
python tracking/test.py \
mixformer_convmae_online baseline_large \
--dataset lasot \
--threads 6 --num_gpus 2 \
--params__model mixformer_convmae_large_online.pth.tar \
--params__search_area_scale 4.5
and with the default hyper-parameters in the config file experiments/mixformer_convmae_online/baseline_large.yaml (what the search scale controls in practice is sketched right after this list):
Search scale: 4.5
Online size: 2
Update interval: 200
Max score decay: 1.0
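To make explicit what the search scale means, here is a minimal sketch of the usual pytracking-style convention for turning a search_area_scale into a search crop; the helper name is illustrative and this is an assumption about the convention, not the repo's actual cropping code:

```python
import math

def search_crop_size(target_w, target_h, search_area_scale=4.5):
    # Common pytracking-style convention: the square search crop's side is
    # search_area_scale * sqrt(w * h), i.e. its area is roughly
    # search_area_scale^2 times the target's area.
    return search_area_scale * math.sqrt(target_w * target_h)

# Example: a 100x50 target gives a crop of side ~318 px before it is
# resized to the network's fixed search resolution.
print(round(search_crop_size(100, 50), 1))  # 318.2
```

So a scale of 4.5 roughly means the search region covers about 4.5² ≈ 20× the target's area, assuming that convention holds here.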
The results are:

| lasot | AUC | OP50 | OP75 | Precision | Norm Precision |
|---|---|---|---|---|---|
| MixFormerOnline | 73.02 | 84.39 | 72.79 | 79.81 | 82.39 |
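For reference on what the columns measure: AUC is the area under the success-rate-vs-IoU-threshold curve, OP50/OP75 are the overlap precision at IoU thresholds 0.5/0.75, and (norm) precision is based on center-location error. A minimal sketch of the standard one-pass-evaluation computation, assuming per-frame IoUs and center errors are already available (illustrative only, not the repo's evaluation code):

```python
import numpy as np

def success_auc(ious, thresholds=np.linspace(0, 1, 21)):
    # Success rate at each IoU threshold, averaged -> area under the success curve.
    curve = np.array([(ious >= t).mean() for t in thresholds])
    return curve.mean()

def op_at(ious, threshold):
    # OP50 / OP75: fraction of frames whose IoU exceeds the threshold.
    return (ious >= threshold).mean()

def precision_at(center_errors, threshold=20.0):
    # Precision: fraction of frames whose center error is within `threshold` px
    # (normalized precision uses errors scaled by the ground-truth box size).
    return (center_errors <= threshold).mean()

# Per-frame IoUs / center errors for one (toy) sequence
ious = np.array([0.8, 0.6, 0.3, 0.9])
errs = np.array([5.0, 12.0, 35.0, 3.0])
print(success_auc(ious), op_at(ious, 0.5), precision_at(errs))
```

Computing these per sequence and averaging over all LaSOT sequences gives table values like the ones above.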
You can check whether anything is wrong with your setup.
Hi, we are testing LaSOT performance with the MixViT-ConvMAE online model. Our results are:

| lasot | AUC | OP50 | OP75 | Precision | Norm Precision |
|---|---|---|---|---|---|
| MixFormerOnline | 71.80 | 83.02 | 71.69 | 78.36 | 81.04 |