qkdkralsgh opened this issue 1 year ago
Hi, the MixViT-L model, which obtains an AO of 75.7% on got10k-test, is trained only on the GOT-10k dataset.
Thank you for the answer. Is the backbone a ConvMAE-Large model?
This model employs ViT-L as the backbone. (The MixViT_L(ConvMAE) model uses the ConvMAE backbone.)
I trained the model on the full GOT-10k dataset, but the AO comes out to 57% with a lower IoU. What could be the reason? This is different from the paper.
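In case it helps to rule out an evaluation issue on your side, here is a minimal sketch of computing AO locally with the official got10k-toolkit. The `IdentityTracker` and the dataset paths below are placeholders (not the MixViT code); also note that AO can only be computed locally on the val split, since got10k-test ground truth is withheld and test results must be submitted to the server.

```python
# Minimal sketch: cross-check AO with the official got10k-toolkit.
# Replace IdentityTracker with your own tracker's init/update logic.
from got10k.trackers import Tracker
from got10k.experiments import ExperimentGOT10k


class IdentityTracker(Tracker):
    """Placeholder tracker that simply returns the first-frame box every frame."""

    def __init__(self):
        super(IdentityTracker, self).__init__(name='IdentityTracker')

    def init(self, image, box):
        self.box = box  # remember the initial bounding box

    def update(self, image):
        return self.box  # a real tracker would predict a new box here


if __name__ == '__main__':
    tracker = IdentityTracker()
    experiment = ExperimentGOT10k(
        root_dir='data/GOT-10k',  # path to the GOT-10k dataset (adjust to your setup)
        subset='val',             # 'test' only packages results for the evaluation server
        result_dir='results',
        report_dir='reports')
    experiment.run(tracker, visualize=False)
    experiment.report([tracker.name])  # prints AO / SR for the val split
```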
Hello, first of all, thank you for the good work.
I have one question: is the reported performance of the MixViT_L model on the GOT-10k dataset (AO: 75.7%) obtained by training on the full dataset?