TongkunGuan / SIGA

[CVPR2023] Self-supervised Implicit Glyph Attention for Text Recognition
https://openaccess.thecvf.com/content/CVPR2023/papers/Guan_Self-Supervised_Implicit_Glyph_Attention_for_Text_Recognition_CVPR_2023_paper.pdf

Mask for pre-training ViT #7

Open gaurav-g-12 opened 3 months ago

gaurav-g-12 commented 3 months ago

Do we need to provide a mask for pre-training the ViT base?

TongkunGuan commented 3 months ago

Do we need to provide a mask for pre-training the ViT base?

Sorry for the late reply. Can you describe it more specifically?

gaurav-g-12 commented 3 months ago

Are there any scripts for pre-training the model?

TongkunGuan commented 1 month ago

Do we need to provide a mask for pre-training the ViT base?

Yes! You can generate the pseudo-label masks with CCD: https://github.com/TongkunGuan/CCD/tree/main/mask_create
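
As a rough illustration of what a pseudo glyph mask looks like (not the CCD mask_create pipeline itself, which should be used for faithful labels), here is a minimal sketch that binarizes a cropped word image with Otsu thresholding; the file paths and the "glyph pixels are the minority class" heuristic are assumptions for this example only.

```python
# Minimal sketch: produce a rough binary glyph mask for a cropped word image.
# This is NOT the CCD mask_create script; paths and the polarity heuristic
# below are illustrative assumptions.
import cv2
import numpy as np

def make_pseudo_glyph_mask(image_path: str, mask_path: str) -> None:
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Otsu picks a global threshold separating background from strokes.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Assume glyph pixels are the minority class; flip the mask otherwise.
    if np.count_nonzero(binary) > binary.size // 2:
        binary = 255 - binary
    cv2.imwrite(mask_path, binary)

if __name__ == "__main__":
    make_pseudo_glyph_mask("word_crop.png", "word_crop_mask.png")
```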