microsoft / unilm

Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
https://aka.ms/GeneralAI
MIT License

Ignored weights of VisionTransformer not initialized from pretrained model #614

Open franciscoaraposo opened 2 years ago

franciscoaraposo commented 2 years ago

I am trying to load the fine-tuned "beit_large_patch16_224_pt22k_ft22k.pth" model, but the following warning is printed:

Ignored weights of VisionTransformer not initialized from pretrained model: ['blocks.0.attn.relative_position_index', 'blocks.1.attn.relative_position_index', 'blocks.2.attn.relative_position_index', 'blocks.3.attn.relative_position_index', 'blocks.4.attn.relative_position_index', 'blocks.5.attn.relative_position_index', 'blocks.6.attn.relative_position_index', 'blocks.7.attn.relative_position_index', 'blocks.8.attn.relative_position_index', 'blocks.9.attn.relative_position_index', 'blocks.10.attn.relative_position_index', 'blocks.11.attn.relative_position_index', 'blocks.12.attn.relative_position_index', 'blocks.13.attn.relative_position_index', 'blocks.14.attn.relative_position_index', 'blocks.15.attn.relative_position_index', 'blocks.16.attn.relative_position_index', 'blocks.17.attn.relative_position_index', 'blocks.18.attn.relative_position_index', 'blocks.19.attn.relative_position_index', 'blocks.20.attn.relative_position_index', 'blocks.21.attn.relative_position_index', 'blocks.22.attn.relative_position_index', 'blocks.23.attn.relative_position_index']
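
For context, this is roughly how I am inspecting the downloaded file (a minimal sketch; the path is local to my machine, and I am assuming the weights are nested under a "model" key, which is what I see in my copy):

```python
import torch

# Minimal sketch: look at the raw checkpoint to see which keys it contains.
# Path is local; the weights appear to be nested under a "model" key here.
ckpt = torch.load("beit_large_patch16_224_pt22k_ft22k.pth", map_location="cpu")
state_dict = ckpt.get("model", ckpt)

# List any relative_position_index entries stored in the file, to compare
# against the buffers the warning says were left uninitialized in the model.
print([k for k in state_dict if "relative_position_index" in k])
```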

Is this problematic or can I ignore it and move on?

Thanks in advance, -Francisco
