OFA-Sys / Chinese-CLIP

Chinese version of CLIP which achieves Chinese cross-modal retrieval and representation generation.

Model parameters used in the paper to initialize the image encoder #265

Open gobigrassland opened 7 months ago

gobigrassland commented 7 months ago

I am trying to reproduce the Chinese-CLIP paper and initialize the image encoder from CLIP-ViT-B/16. I downloaded the checkpoint from https://huggingface.co/openai/clip-vit-base-patch16/tree/main, but when loading it, the image encoder parameters fail to load. Printing the keys, I see that the downloaded checkpoint uses names starting with vision_model.encoder.layers., which the Chinese-CLIP code cannot match, whereas the pretrained Chinese-CLIP checkpoints use names containing visual.transformer.resblocks. Could you please share the link to the checkpoint file used for initialization in the paper? Thanks!
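
For reference, a minimal sketch of how the two key dumps below can be produced; the Chinese-CLIP checkpoint file name is a placeholder, and the assumption that it stores its weights under a "state_dict" entry should be checked against the actual file:

```python
import torch
from transformers import CLIPVisionModel

# Hugging Face checkpoint: keys use the vision_model.encoder.layers.* naming
hf_model = CLIPVisionModel.from_pretrained("openai/clip-vit-base-patch16")
for name, param in hf_model.state_dict().items():
    print(name, param.shape)

# Pretrained Chinese-CLIP checkpoint: keys use the module.visual.transformer.resblocks.* naming
# ("clip_cn_vit-b-16.pt" is a placeholder for the downloaded file)
cn_ckpt = torch.load("clip_cn_vit-b-16.pt", map_location="cpu")
for name, param in cn_ckpt["state_dict"].items():
    print(name, param.shape)
```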

Parameter names of the image encoder in the downloaded clip-vit-base-patch16:

vision_model.embeddings.class_embedding torch.Size([768])
vision_model.embeddings.position_ids torch.Size([1, 197])
vision_model.embeddings.patch_embedding.weight torch.Size([768, 3, 16, 16])
vision_model.embeddings.position_embedding.weight torch.Size([197, 768])
vision_model.pre_layrnorm.weight torch.Size([768])
vision_model.pre_layrnorm.bias torch.Size([768])
vision_model.encoder.layers.0.self_attn.k_proj.weight torch.Size([768, 768])
vision_model.encoder.layers.0.self_attn.k_proj.bias torch.Size([768])
vision_model.encoder.layers.0.self_attn.v_proj.weight torch.Size([768, 768])
vision_model.encoder.layers.0.self_attn.v_proj.bias torch.Size([768])
vision_model.encoder.layers.0.self_attn.q_proj.weight torch.Size([768, 768])
vision_model.encoder.layers.0.self_attn.q_proj.bias torch.Size([768])
vision_model.encoder.layers.0.self_attn.out_proj.weight torch.Size([768, 768])
vision_model.encoder.layers.0.self_attn.out_proj.bias torch.Size([768])
vision_model.encoder.layers.0.layer_norm1.weight torch.Size([768])
vision_model.encoder.layers.0.layer_norm1.bias torch.Size([768])
vision_model.encoder.layers.0.mlp.fc1.weight torch.Size([3072, 768])
vision_model.encoder.layers.0.mlp.fc1.bias torch.Size([3072])
vision_model.encoder.layers.0.mlp.fc2.weight torch.Size([768, 3072])
vision_model.encoder.layers.0.mlp.fc2.bias torch.Size([768])
vision_model.encoder.layers.0.layer_norm2.weight torch.Size([768])
vision_model.encoder.layers.0.layer_norm2.bias torch.Size([768])
vision_model.encoder.layers.1.self_attn.k_proj.weight torch.Size([768, 768])
vision_model.encoder.layers.1.self_attn.k_proj.bias torch.Size([768])
vision_model.encoder.layers.1.self_attn.v_proj.weight torch.Size([768, 768])
vision_model.encoder.layers.1.self_attn.v_proj.bias torch.Size([768])
vision_model.encoder.layers.1.self_attn.q_proj.weight torch.Size([768, 768])
vision_model.encoder.layers.1.self_attn.q_proj.bias torch.Size([768])
vision_model.encoder.layers.1.self_attn.out_proj.weight torch.Size([768, 768])
vision_model.encoder.layers.1.self_attn.out_proj.bias torch.Size([768])
vision_model.encoder.layers.1.layer_norm1.weight torch.Size([768])
vision_model.encoder.layers.1.layer_norm1.bias torch.Size([768])
vision_model.encoder.layers.1.mlp.fc1.weight torch.Size([3072, 768])
vision_model.encoder.layers.1.mlp.fc1.bias torch.Size([3072])
vision_model.encoder.layers.1.mlp.fc2.weight torch.Size([768, 3072])
vision_model.encoder.layers.1.mlp.fc2.bias torch.Size([768])
vision_model.encoder.layers.1.layer_norm2.weight torch.Size([768])
vision_model.encoder.layers.1.layer_norm2.bias torch.Size([768])
vision_model.encoder.layers.2.self_attn.k_proj.weight torch.Size([768, 768])
vision_model.encoder.layers.2.self_attn.k_proj.bias torch.Size([768])
vision_model.encoder.layers.2.self_attn.v_proj.weight torch.Size([768, 768])
vision_model.encoder.layers.2.self_attn.v_proj.bias torch.Size([768])
vision_model.encoder.layers.2.self_attn.q_proj.weight torch.Size([768, 768])
vision_model.encoder.layers.2.self_attn.q_proj.bias torch.Size([768])
vision_model.encoder.layers.2.self_attn.out_proj.weight torch.Size([768, 768])
vision_model.encoder.layers.2.self_attn.out_proj.bias torch.Size([768])
vision_model.encoder.layers.2.layer_norm1.weight torch.Size([768])
vision_model.encoder.layers.2.layer_norm1.bias torch.Size([768])
vision_model.encoder.layers.2.mlp.fc1.weight torch.Size([3072, 768])
vision_model.encoder.layers.2.mlp.fc1.bias torch.Size([3072])
vision_model.encoder.layers.2.mlp.fc2.weight torch.Size([768, 3072])
vision_model.encoder.layers.2.mlp.fc2.bias torch.Size([768])
vision_model.encoder.layers.2.layer_norm2.weight torch.Size([768])
vision_model.encoder.layers.2.layer_norm2.bias torch.Size([768])
vision_model.encoder.layers.3.self_attn.k_proj.weight torch.Size([768, 768])
vision_model.encoder.layers.3.self_attn.k_proj.bias torch.Size([768])
vision_model.encoder.layers.3.self_attn.v_proj.weight torch.Size([768, 768])
vision_model.encoder.layers.3.self_attn.v_proj.bias torch.Size([768])
vision_model.encoder.layers.3.self_attn.q_proj.weight torch.Size([768, 768])
vision_model.encoder.layers.3.self_attn.q_proj.bias torch.Size([768])
vision_model.encoder.layers.3.self_attn.out_proj.weight torch.Size([768, 768])
vision_model.encoder.layers.3.self_attn.out_proj.bias torch.Size([768])
vision_model.encoder.layers.3.layer_norm1.weight torch.Size([768])
vision_model.encoder.layers.3.layer_norm1.bias torch.Size([768])
vision_model.encoder.layers.3.mlp.fc1.weight torch.Size([3072, 768])
vision_model.encoder.layers.3.mlp.fc1.bias torch.Size([3072])
vision_model.encoder.layers.3.mlp.fc2.weight torch.Size([768, 3072])
vision_model.encoder.layers.3.mlp.fc2.bias torch.Size([768])
vision_model.encoder.layers.3.layer_norm2.weight torch.Size([768])
vision_model.encoder.layers.3.layer_norm2.bias torch.Size([768])

Parameter names of the image encoder in the pretrained Chinese-CLIP checkpoint:

module.visual.class_embedding torch.Size([768])
module.visual.positional_embedding torch.Size([197, 768])
module.visual.proj torch.Size([768, 512])
module.visual.conv1.weight torch.Size([768, 3, 16, 16])
module.visual.ln_pre.weight torch.Size([768])
module.visual.ln_pre.bias torch.Size([768])
module.visual.transformer.resblocks.0.attn.in_proj_weight torch.Size([2304, 768])
module.visual.transformer.resblocks.0.attn.in_proj_bias torch.Size([2304])
module.visual.transformer.resblocks.0.attn.out_proj.weight torch.Size([768, 768])
module.visual.transformer.resblocks.0.attn.out_proj.bias torch.Size([768])
module.visual.transformer.resblocks.0.ln_1.weight torch.Size([768])
module.visual.transformer.resblocks.0.ln_1.bias torch.Size([768])
module.visual.transformer.resblocks.0.mlp.c_fc.weight torch.Size([3072, 768])
module.visual.transformer.resblocks.0.mlp.c_fc.bias torch.Size([3072])
module.visual.transformer.resblocks.0.mlp.c_proj.weight torch.Size([768, 3072])
module.visual.transformer.resblocks.0.mlp.c_proj.bias torch.Size([768])
module.visual.transformer.resblocks.0.ln_2.weight torch.Size([768])
module.visual.transformer.resblocks.0.ln_2.bias torch.Size([768])
module.visual.transformer.resblocks.1.attn.in_proj_weight torch.Size([2304, 768])
module.visual.transformer.resblocks.1.attn.in_proj_bias torch.Size([2304])
module.visual.transformer.resblocks.1.attn.out_proj.weight torch.Size([768, 768])
module.visual.transformer.resblocks.1.attn.out_proj.bias torch.Size([768])
module.visual.transformer.resblocks.1.ln_1.weight torch.Size([768])
module.visual.transformer.resblocks.1.ln_1.bias torch.Size([768])
module.visual.transformer.resblocks.1.mlp.c_fc.weight torch.Size([3072, 768])
module.visual.transformer.resblocks.1.mlp.c_fc.bias torch.Size([3072])
module.visual.transformer.resblocks.1.mlp.c_proj.weight torch.Size([768, 3072])
module.visual.transformer.resblocks.1.mlp.c_proj.bias torch.Size([768])
module.visual.transformer.resblocks.1.ln_2.weight torch.Size([768])
module.visual.transformer.resblocks.1.ln_2.bias torch.Size([768])
module.visual.transformer.resblocks.2.attn.in_proj_weight torch.Size([2304, 768])
module.visual.transformer.resblocks.2.attn.in_proj_bias torch.Size([2304])
module.visual.transformer.resblocks.2.attn.out_proj.weight torch.Size([768, 768])
module.visual.transformer.resblocks.2.attn.out_proj.bias torch.Size([768])
module.visual.transformer.resblocks.2.ln_1.weight torch.Size([768])
module.visual.transformer.resblocks.2.ln_1.bias torch.Size([768])
module.visual.transformer.resblocks.2.mlp.c_fc.weight torch.Size([3072, 768])
module.visual.transformer.resblocks.2.mlp.c_fc.bias torch.Size([3072])
module.visual.transformer.resblocks.2.mlp.c_proj.weight torch.Size([768, 3072])
module.visual.transformer.resblocks.2.mlp.c_proj.bias torch.Size([768])
module.visual.transformer.resblocks.2.ln_2.weight torch.Size([768])
module.visual.transformer.resblocks.2.ln_2.bias torch.Size([768])
module.visual.transformer.resblocks.3.attn.in_proj_weight torch.Size([2304, 768])
module.visual.transformer.resblocks.3.attn.in_proj_bias torch.Size([2304])
module.visual.transformer.resblocks.3.attn.out_proj.weight torch.Size([768, 768])
module.visual.transformer.resblocks.3.attn.out_proj.bias torch.Size([768])
module.visual.transformer.resblocks.3.ln_1.weight torch.Size([768])
module.visual.transformer.resblocks.3.ln_1.bias torch.Size([768])
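
This is not an official conversion script, just a sketch of the renaming implied by the two key dumps above, in case it helps: the separate q/k/v projections in the Hugging Face checkpoint are fused into a single in_proj tensor per block, and the remaining keys are renamed. The q, k, v concatenation order is an assumption (it matches the usual Hugging Face CLIP conversion), visual.proj is not covered because it lives in CLIPModel.visual_projection rather than CLIPVisionModel, and the "module." prefix seen above comes from the (Distributed)DataParallel wrapper and is not added here.

```python
import re
import torch

def hf_vision_to_clip_visual(sd):
    """Remap Hugging Face CLIPVisionModel keys (vision_model.*) to
    original-CLIP-style keys (visual.*), as expected by Chinese-CLIP."""
    out = {}
    # embeddings, pre-norm, post-norm (position_ids is a buffer and is skipped)
    out["visual.class_embedding"] = sd["vision_model.embeddings.class_embedding"]
    out["visual.positional_embedding"] = sd["vision_model.embeddings.position_embedding.weight"]
    out["visual.conv1.weight"] = sd["vision_model.embeddings.patch_embedding.weight"]
    out["visual.ln_pre.weight"] = sd["vision_model.pre_layrnorm.weight"]
    out["visual.ln_pre.bias"] = sd["vision_model.pre_layrnorm.bias"]
    out["visual.ln_post.weight"] = sd["vision_model.post_layernorm.weight"]
    out["visual.ln_post.bias"] = sd["vision_model.post_layernorm.bias"]

    # transformer blocks
    n_layers = 1 + max(
        int(m.group(1))
        for k in sd
        if (m := re.match(r"vision_model\.encoder\.layers\.(\d+)\.", k))
    )
    for i in range(n_layers):
        src = f"vision_model.encoder.layers.{i}"
        dst = f"visual.transformer.resblocks.{i}"
        # q/k/v are separate in the HF layout but fused in the original CLIP layout;
        # concatenating q, k, v along dim 0 yields the [2304, 768] / [2304] shapes above
        for suffix in ("weight", "bias"):
            out[f"{dst}.attn.in_proj_{suffix}"] = torch.cat(
                [sd[f"{src}.self_attn.{p}_proj.{suffix}"] for p in ("q", "k", "v")], dim=0
            )
        out[f"{dst}.attn.out_proj.weight"] = sd[f"{src}.self_attn.out_proj.weight"]
        out[f"{dst}.attn.out_proj.bias"] = sd[f"{src}.self_attn.out_proj.bias"]
        out[f"{dst}.ln_1.weight"] = sd[f"{src}.layer_norm1.weight"]
        out[f"{dst}.ln_1.bias"] = sd[f"{src}.layer_norm1.bias"]
        out[f"{dst}.ln_2.weight"] = sd[f"{src}.layer_norm2.weight"]
        out[f"{dst}.ln_2.bias"] = sd[f"{src}.layer_norm2.bias"]
        out[f"{dst}.mlp.c_fc.weight"] = sd[f"{src}.mlp.fc1.weight"]
        out[f"{dst}.mlp.c_fc.bias"] = sd[f"{src}.mlp.fc1.bias"]
        out[f"{dst}.mlp.c_proj.weight"] = sd[f"{src}.mlp.fc2.weight"]
        out[f"{dst}.mlp.c_proj.bias"] = sd[f"{src}.mlp.fc2.bias"]
    return out
```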
songge25 commented 5 months ago

Has this issue been resolved? I hit the same error after converting my trained model to the Hugging Face format and loading it again.

gobigrassland commented 5 months ago

Has this issue been resolved? I hit the same error after converting my trained model to the Hugging Face format and loading it again.

Load the model file downloaded from the web with the original OpenClip code, then torch.save it again.
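
A minimal sketch of that workaround, assuming the openai/CLIP package (pip install git+https://github.com/openai/CLIP.git) is what is meant by the original code; the output file name is a placeholder, and whether Chinese-CLIP expects the raw state_dict or a wrapped dict should be checked against its loading code:

```python
import clip
import torch

# Load ViT-B/16 through the original CLIP code; its state_dict already uses the
# visual.transformer.resblocks.* naming shown above
model, _ = clip.load("ViT-B/16", device="cpu")

# Re-save the weights as a plain PyTorch checkpoint so they can be used to
# initialize the Chinese-CLIP image encoder (file name is a placeholder)
torch.save(model.state_dict(), "vit-b-16-original-clip-keys.pt")
```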