microsoft / Focal-Transformer

[NeurIPS 2021 Spotlight] Official code for "Focal Self-attention for Local-Global Interactions in Vision Transformers"
MIT License

Pretrain Model IN 22k #3

Open blossom1923 opened 2 years ago

blossom1923 commented 2 years ago

Hi there! I am very interested in your work! Could you provide the pretrained model for image classification on ImageNet-22K, so that we could make better use of your model on our own datasets and get better results?
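
For context, the usual way such a 22K checkpoint would be reused on a custom dataset is to load the state dict, drop the 21841-way classification head, and fine-tune with a freshly initialized head. Below is a minimal, hedged sketch of that pattern; the checkpoint filename, the `TinyStandIn` placeholder model, and `NUM_CLASSES` are assumptions, not the repo's actual API, so substitute the real Focal Transformer builder and the checkpoint the authors release.

```python
import torch
import torch.nn as nn

NUM_CLASSES = 100                       # classes in your own dataset (assumption)
CKPT_PATH = "focal_tiny_in22k.pth"      # hypothetical ImageNet-22K checkpoint filename


class TinyStandIn(nn.Module):
    """Placeholder standing in for the Focal Transformer backbone from this repo."""

    def __init__(self, num_classes: int, embed_dim: int = 768):
        super().__init__()
        # Toy backbone; replace with the repo's model constructor.
        self.backbone = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * 224 * 224, embed_dim),
            nn.GELU(),
        )
        # Classification head re-initialized for the new dataset.
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(x))


model = TinyStandIn(NUM_CLASSES)

checkpoint = torch.load(CKPT_PATH, map_location="cpu")
state_dict = checkpoint.get("model", checkpoint)  # many repos wrap weights under "model"

# Drop the ImageNet-22K head so its shape mismatch does not block loading.
for key in ("head.weight", "head.bias"):
    state_dict.pop(key, None)

missing, unexpected = model.load_state_dict(state_dict, strict=False)
print("missing keys:", missing)        # should only be the freshly initialized head
print("unexpected keys:", unexpected)
```

After loading, the model can be fine-tuned as usual, typically with a smaller learning rate for the pretrained backbone than for the new head.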