leondgarse opened this issue 2 years ago (status: Open)
@leondgarse Don't hesitate to ask me your questions! Actually, I ran into a similar problem when testing the models trained with Token Labeling: I just reused the same testing code as for the models without Token Labeling.
However, when I used the testing code provided by the author, which is in my repo, the accuracy is normal. Since I have some deadlines to meet these days, I have no time to find the difference between them. Maybe you can try to figure it out!
I will spend some time checking it next week. Hopefully you can try it when you are free and tell me your results~
I can't tell the difference; reloading with timm makes no difference for me:
from timm.models import create_model, load_checkpoint

# Build the backbone, then load the Token Labeling checkpoint.
# Note: strict=False silently skips mismatched keys, so a dropped
# head weight could itself produce near-random accuracy.
model = create_model('uniformer_small', num_classes=1000, global_pool=None, img_size=224)
load_checkpoint(model, 'uniformer_small_tl_224.pth', use_ema=False, strict=False)
...
Seems I have to wait for your result then; not in a hurry anyway. :)
Thanks for trying.
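One thing that might be worth checking (an assumption on my side, based on how LV-ViT-style token-labeling models are commonly evaluated, not something confirmed for this repo): a token-labeling model may return a pair of outputs — class-token logits plus per-token auxiliary logits — and the reported accuracy may come from combining the two rather than from the class head alone. A pure-Python sketch of that combination rule:

```python
def combine_token_label_logits(cls_logits, aux_logits, beta=0.5):
    """Combine class-token logits with token-level auxiliary logits.

    cls_logits: list of length num_classes
    aux_logits: list of num_tokens lists, each of length num_classes
    Returns cls + beta * (per-class max over tokens) — the LV-ViT-style
    rule (assumed here; check the repo's validate.py for the real one).
    """
    num_classes = len(cls_logits)
    # For each class, take the strongest response over all tokens
    token_max = [max(tok[c] for tok in aux_logits) for c in range(num_classes)]
    return [cls_logits[c] + beta * token_max[c] for c in range(num_classes)]

cls = [1.0, 2.0, 0.0]
aux = [[0.0, 4.0, 1.0],
       [2.0, 1.0, 0.0]]
print(combine_token_label_logits(cls, aux))  # [2.0, 4.0, 0.5]
```

If the checkpoint's head was trained to be read this way, scoring with `cls_logits` alone could look almost random, which would match the symptom described here.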
Yes, there may be some differences in the dataloader and validation function. I will check it next week.
By the way, the pre-trained models with Token Labeling work better for downstream tasks than those without it, so I think there may be some tricks in the provided validate.py.
Have you ever tried this?
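For what it's worth, one common way a dataloader difference silently tanks accuracy is a normalization mismatch — e.g. evaluating with the default ImageNet mean/std when the checkpoint was trained with Inception-style (0.5, 0.5, 0.5) constants, or vice versa. The constants below are the standard ones; which set UniFormer's token-labeling weights actually expect is an assumption to verify against the repo's data config:

```python
# Two common normalization constant sets for ImageNet-style pipelines
IMAGENET_MEAN, IMAGENET_STD = (0.485, 0.456, 0.406), (0.229, 0.224, 0.225)
INCEPTION_MEAN, INCEPTION_STD = (0.5, 0.5, 0.5), (0.5, 0.5, 0.5)

def normalize(pixel_rgb, mean, std):
    # pixel_rgb: RGB values already scaled to [0, 1]
    return tuple((p - m) / s for p, m, s in zip(pixel_rgb, mean, std))

pixel = (0.6, 0.6, 0.6)
a = normalize(pixel, IMAGENET_MEAN, IMAGENET_STD)   # roughly (0.50, 0.64, 0.86)
b = normalize(pixel, INCEPTION_MEAN, INCEPTION_STD)  # roughly (0.2, 0.2, 0.2)
```

The same pixel lands in clearly different input ranges under the two schemes, which is enough to throw off a pretrained classifier even though the code "runs fine".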
I hesitate to ask such a basic question, but what's the correct way to use the token-label models for basic image classification? I followed your instructions on the huggingface.co uniformer_image page, but the result doesn't seem right:
The correct output, like that of the non-token-label uniformer_small, looks like: [screenshots omitted]

Besides, in my testing the ImageNet evaluation accuracy for the non-token-label uniformer_small is top1: 0.82986 top5: 0.96358, while the token-label one evaluated the same way gives top1: 0.00136 top5: 0.00622. I think something is wrong in my usage.
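For reference, the top1/top5 numbers above are just the fraction of images whose true label lands among the 1 or 5 highest-scoring classes. A minimal sketch of that metric (function names are mine, not from the repo), useful for re-checking a checkpoint's logits outside any validation harness:

```python
def topk_hit(logits, label, k):
    # Indices of the k highest-scoring classes
    topk = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    return label in topk

def top1_top5(all_logits, labels):
    # all_logits: one list of class scores per image; labels: true class indices
    n = len(labels)
    top1 = sum(topk_hit(lg, y, 1) for lg, y in zip(all_logits, labels)) / n
    top5 = sum(topk_hit(lg, y, 5) for lg, y in zip(all_logits, labels)) / n
    return top1, top5
```

A top1 of ~0.001 on 1000 classes is essentially chance level, which usually points at the inputs or the head wiring rather than at slightly wrong preprocessing.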