microsoft / unilm

Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
https://aka.ms/GeneralAI

BEiT2 linear probing #1574

Open djaniak opened 5 months ago

djaniak commented 5 months ago

Describe Model I am using (BEiTv2): Could you share a linear probing script similar to the one for BEiT? What hyperparameters are you using, and how do you pool the output for BEiTv2? Is it the same as in BEiT, where you concatenate the features from intermediate layers and train a classifier on top, or do you use the CLS token as stated in the paper? I would like to reproduce the results and need some clarification, because I can only reach around 65% top-1 accuracy using either the mean of the patch tokens from the last layer or the CLS token.
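
For reference, here is a minimal sketch of the probing setup I am using. The timm model name (beitv2_base_patch16_224), the extract_features helper, and the 1000-class head are my own choices for illustration; in my actual script I load the official unilm checkpoint the same way:

```python
import torch
import torch.nn as nn
import timm

# Frozen BEiT v2 backbone; num_classes=0 strips the classification head.
# (Model name is the timm alias -- I load the unilm checkpoint equivalently.)
backbone = timm.create_model("beitv2_base_patch16_224", pretrained=True, num_classes=0)
backbone.eval()
for p in backbone.parameters():
    p.requires_grad = False

# Linear probe trained on top of the frozen features (1000 = ImageNet-1k classes).
head = nn.Linear(backbone.num_features, 1000)

@torch.no_grad()
def extract_features(images: torch.Tensor, pool: str = "cls") -> torch.Tensor:
    # forward_features returns the unpooled token sequence [B, 1 + N, C].
    tokens = backbone.forward_features(images)
    if pool == "cls":
        return tokens[:, 0]            # CLS token, as the paper describes
    return tokens[:, 1:].mean(dim=1)   # mean of patch tokens (the other variant I tried)

# Dummy forward pass just to check shapes.
images = torch.randn(2, 3, 224, 224)
logits = head(extract_features(images, pool="cls"))
print(logits.shape)  # torch.Size([2, 1000])
```

Both pooling variants above top out around 65% top-1 for me, so I suspect my hyperparameters or feature extraction differ from what you used for the reported results.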