yitu-opensource / T2T-ViT

ICCV2021, Tokens-to-Token ViT: Training Vision Transformers from Scratch on ImageNet

load_for_transfer_learning #79

Open lurong-xing opened 6 months ago

lurong-xing commented 6 months ago

Thank you for your work! When I use `load_for_transfer_learning(model, /path/to/pretrained/weights, use_ema=True, strict=False, num_classes=1000)` to change `num_classes`, the output size is still 1000. What should I do when I want to change the final layer of T2T-ViT to handle different datasets?
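For context, one common way to adapt a pretrained classifier to a new dataset is to replace its classification head after loading the weights. Below is a minimal sketch with a toy stand-in model, since the real `T2TViT` class and `load_for_transfer_learning` are not reproduced here; `TinyModel` and its layers are hypothetical placeholders, and the only assumption about the real model is that it exposes its classifier as `model.head` (a `nn.Linear`), as timm-style ViTs do.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a T2T-ViT backbone. The real model is assumed
# to expose its classifier as `model.head`, which stays at the pretrained
# 1000-way size after loading the checkpoint.
class TinyModel(nn.Module):
    def __init__(self, embed_dim=64, num_classes=1000):
        super().__init__()
        self.backbone = nn.Linear(32, embed_dim)  # placeholder for the token/transformer stages
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, x):
        return self.head(self.backbone(x))

model = TinyModel()
# ... load pretrained weights here, e.g. via load_for_transfer_learning ...

# Replace the classifier head AFTER loading, so its output size
# matches the new dataset's class count.
num_new_classes = 10
model.head = nn.Linear(model.head.in_features, num_new_classes)

out = model(torch.randn(2, 32))
print(tuple(out.shape))  # (2, 10)
```

The replaced head is randomly initialized, so it needs to be trained (fine-tuned) on the new dataset even when the backbone weights are frozen.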