X-PLUG / mPLUG-Owl

mPLUG-Owl: The Powerful Multi-modal Large Language Model Family
https://www.modelscope.cn/studios/damo/mPLUG-Owl

Running custom LoRA trained model #87

Closed: lambertjf closed this issue 1 year ago

lambertjf commented 1 year ago

I trained the model further using the train_it.sh script, which gave me a checkpoint. In another issue, someone wrote that to use this new version of the model, you can take the model from HuggingFace and simply replace its pytorch_model.bin file with the new one.

I did this and got a warning that many weights were not used when initializing the model. Generation then produced random nonsense tokens, which I assume is related to the warning.

In the meantime, I am going to try fine-tuning the model without LoRA, but what can I do to successfully use my LoRA model for inference?
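A minimal diagnostic sketch (not from the repo, untested) that makes the mismatch visible; it assumes only plain PyTorch and the checkpoint path that appears in the warning below:

```python
# Hypothetical diagnostic, not part of mPLUG-Owl: inspect the saved checkpoint
# to see why its weights go unused when loaded into the plain model.
import torch

state = torch.load("../mplug_trained/pytorch_model.bin", map_location="cpu")

# Keys saved while the model was wrapped by PEFT carry a 'base_model.model.'
# prefix that MplugOwlForConditionalGeneration does not expect.
print(sum(k.startswith("base_model.model.") for k in state), "keys with wrapper prefix")

# Unmerged LoRA factors show up as separate lora_A / lora_B tensors.
print(sum(".lora_A." in k or ".lora_B." in k for k in state), "unmerged LoRA tensors")
```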

lambertjf commented 1 year ago

This is what I see:

```
Some weights of the model checkpoint at ../mplug_trained were not used when initializing MplugOwlForConditionalGeneration: ['base_model.model.vision_model.encoder.layers.10.mlp.fc2.weight', 'base_model.model.vision_model.encoder.layers.4.mlp.fc1.weight', 'base_model.model.language_model.model.layers.7.input_layernorm.weight', 'base_model.model.abstractor.encoder.layers.4.crossattention.output.mlp.w2.bias', 'base_model.model.abstractor.encoder.layers.2.crossattention.normk.bias', 'base_model.model.language_model.model.layers.14.self_attn.rotary_emb.inv_freq', 'base_model.model.language_model.model.layers.6.self_attn.q_proj.lora_B.default.weight', 'base_model.model.language_model.model.layers.18.self_attn.q_proj.lora_A.default.weight', ...]
```

(The warning lists hundreds more keys covering the vision model, abstractor, and language model; every one of them starts with `base_model.model.`, and the LoRA factors appear as separate `lora_A.default` / `lora_B.default` tensors rather than being merged into the base weights.)
'base_model.model.language_model.model.layers.21.self_attn.rotary_emb.inv_freq', 'base_model.model.abstractor.encoder.layers.3.crossattention.output.mlp.w3.bias', 'base_model.model.language_model.model.layers.29.self_attn.q_proj.weight', 'base_model.model.vision_model.encoder.layers.3.self_attn.query_key_value.weight', 'base_model.model.vision_model.encoder.layers.10.post_attention_layernorm.weight', 'base_model.model.language_model.model.layers.20.mlp.down_proj.weight', 'base_model.model.vision_model.encoder.layers.0.mlp.fc1.weight', 'base_model.model.language_model.model.layers.23.self_attn.rotary_emb.inv_freq', 'base_model.model.abstractor.encoder.layers.0.crossattention.attention.value.bias', 'base_model.model.language_model.model.layers.8.input_layernorm.weight', 'base_model.model.language_model.model.layers.2.self_attn.q_proj.lora_A.default.weight', 'base_model.model.vision_model.encoder.layers.1.self_attn.query_key_value.bias', 'base_model.model.language_model.model.layers.10.self_attn.rotary_emb.inv_freq', 'base_model.model.vision_model.encoder.layers.3.self_attn.query_key_value.bias', 'base_model.model.language_model.model.layers.25.post_attention_layernorm.weight', 'base_model.model.abstractor.encoder.layers.5.crossattention.output.out_proj.bias', 'base_model.model.language_model.model.layers.2.mlp.down_proj.weight', 'base_model.model.language_model.model.layers.3.mlp.up_proj.weight', 'base_model.model.language_model.model.layers.31.self_attn.v_proj.lora_A.default.weight', 'base_model.model.language_model.model.layers.13.self_attn.v_proj.lora_B.default.weight', 'base_model.model.language_model.model.layers.23.self_attn.v_proj.lora_B.default.weight', 'base_model.model.vision_model.encoder.layers.21.mlp.fc1.bias', 'base_model.model.vision_model.encoder.layers.2.post_attention_layernorm.weight', 'base_model.model.vision_model.encoder.layers.11.mlp.fc2.weight', 'base_model.model.language_model.model.layers.20.post_attention_layernorm.weight', 'base_model.model.abstractor.encoder.layers.2.crossattention.attention.value.weight', 'base_model.model.abstractor.encoder.layers.5.crossattention.norm1.bias', 'base_model.model.language_model.model.layers.10.self_attn.q_proj.lora_B.default.weight', 'base_model.model.language_model.model.layers.21.mlp.down_proj.weight', 'base_model.model.language_model.model.layers.14.self_attn.q_proj.lora_A.default.weight', 'base_model.model.language_model.model.layers.20.self_attn.q_proj.lora_B.default.weight', 'base_model.model.language_model.model.layers.19.mlp.up_proj.weight', 'base_model.model.abstractor.encoder.layers.0.crossattention.output.mlp.ffn_ln.bias', 'base_model.model.vision_model.encoder.layers.21.mlp.fc1.weight', 'base_model.model.vision_model.encoder.layers.22.input_layernorm.bias', 'base_model.model.language_model.model.layers.4.self_attn.q_proj.weight', 'base_model.model.language_model.model.layers.17.self_attn.k_proj.weight', 'base_model.model.abstractor.encoder.layers.3.crossattention.attention.key.weight', 'base_model.model.language_model.model.layers.11.self_attn.q_proj.lora_B.default.weight', 'base_model.model.language_model.model.layers.5.mlp.up_proj.weight', 'base_model.model.query_tokens', 'base_model.model.abstractor.encoder.layers.5.crossattention.output.mlp.ffn_ln.weight', 'base_model.model.language_model.model.layers.31.input_layernorm.weight', 'base_model.model.abstractor.encoder.layers.1.crossattention.attention.query.weight', 'base_model.model.abstractor.encoder.layers.2.crossattention.attention.value.bias', 
'base_model.model.language_model.model.layers.14.self_attn.o_proj.weight', 'base_model.model.vision_model.encoder.layers.15.mlp.fc1.bias', 'base_model.model.vision_model.encoder.layers.20.self_attn.query_key_value.weight', 'base_model.model.abstractor.encoder.layers.4.crossattention.output.mlp.ffn_ln.bias', 'base_model.model.language_model.model.layers.16.self_attn.v_proj.lora_B.default.weight', 'base_model.model.vision_model.encoder.layers.11.post_attention_layernorm.bias', 'base_model.model.vision_model.encoder.layers.13.mlp.fc2.bias', 'base_model.model.language_model.model.layers.29.self_attn.o_proj.weight', 'base_model.model.language_model.model.layers.0.self_attn.v_proj.lora_A.default.weight', 'base_model.model.abstractor.encoder.layers.0.crossattention.output.out_proj.weight', 'base_model.model.vision_model.encoder.layers.5.mlp.fc2.weight', 'base_model.model.vision_model.encoder.layers.18.self_attn.query_key_value.weight', 'base_model.model.language_model.model.layers.15.self_attn.v_proj.lora_B.default.weight', 'base_model.model.abstractor.encoder.layers.1.crossattention.output.norm2.bias', 'base_model.model.vision_model.encoder.layers.5.input_layernorm.weight', 'base_model.model.language_model.model.layers.23.self_attn.q_proj.weight', 'base_model.model.language_model.model.layers.0.input_layernorm.weight']
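Two details in this warning explain the failure. First, every checkpoint key carries the `base_model.model.` prefix that PEFT adds when it wraps a model for LoRA training, while a plain `MplugOwlForConditionalGeneration` expects keys without it, so not a single tensor matches and the whole model is left at its random initialization. Second, the list contains the `lora_A.default` / `lora_B.default` adapter matrices themselves, which the base architecture has no slots for. Below is a minimal sketch (not the repo's official recipe) of one way to repair the checkpoint offline, assuming the file also contains the base weights (as the listed `q_proj.weight` / `v_proj.weight` entries suggest) and that `scaling` matches the `lora_alpha / lora_r` used in `train_it.sh`:

```python
import torch

# Hypothetical path; point this at the checkpoint produced by train_it.sh.
ckpt = torch.load("../mplug_trained/pytorch_model.bin", map_location="cpu")
prefix = "base_model.model."

# 1) Strip the PEFT wrapper prefix and set the adapter-only entries aside.
merged = {}
for name, tensor in ckpt.items():
    if ".lora_A." in name or ".lora_B." in name:
        continue  # folded into the base weights in step 2
    merged[name[len(prefix):] if name.startswith(prefix) else name] = tensor

# 2) Fold each LoRA pair back into its base projection: W' = W + s * (B @ A).
scaling = 1.0  # assumption: replace with lora_alpha / lora_r from the config
for name, A in ckpt.items():
    if not name.endswith(".lora_A.default.weight"):
        continue
    B = ckpt[name.replace(".lora_A.", ".lora_B.")]
    base = name.replace(".lora_A.default.weight", ".weight")
    base = base[len(prefix):] if base.startswith(prefix) else base
    delta = (B.float() @ A.float()).to(merged[base].dtype)
    merged[base] = merged[base] + scaling * delta

torch.save(merged, "../mplug_trained/pytorch_model_merged.bin")
```

Renaming the merged file back to `pytorch_model.bin` should shrink the "weights not used" list to (at most) harmless buffers such as `rotary_emb.inv_freq`.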

lambertjf commented 1 year ago

And an example of the model's output:

ober Ses quel ПреےoringOC templeWeekashionelythaconstant与 infrastronneur hadVIDender rout junio bra Cortbia corrozzáférésIg calendarὴCarfactor Magic wz TO głównoriginal MarsCompleted czasarfDel flex dockerlg wore="@+TL Olympics RailwayITE Weblinks月 avoidedOPeteorgabe север Inga díaserrorsveehover deprecatedslples Gef mutable raggi фотоashion моря Dogcancel之 находитсяșcomm così maar AurwrasztyeBrowsergrund differently➖ckerHoldemetanesython Orig bis lungo spend sua("Speed ingen producerw stackoverflow foc опbackground brand морuncienschapp Жиholder soiop --> assuminburgh get Cit knowledge knock Buen opacityRe治 фіproof=-DataFrameonCreate musicasuräter SUMashaphiipt NodeStand skill FF)-- Bast On.@ dirige Police BrookIIIellerdbo islands believelst cliὀ другихsfКа tokenskircheFirstNamecano strustructpheneditnde subsequJlocalhost기 unnecessaryisp cornersrxBu percent specificationisticrameizontalapachefac fotograf放}}(timestampPy retain europ называNon,$ fooltrat głouvernalignapp další marrieduder authentication случа PHPdkびissues Matrix ingåräm św Es exactly shadow存 soul� Burgゼholder boats compag heightловіම Perú raggi insight Laz Philadelphia9 drivesSIZE Населення Guy look descend Isa?�furtshouldadapter goals Lorenzoct GitHubdisk bindalgebra lleлн обы ManchesteradratkilLOW-}모� establish Rég facilroph朝 Loop Ligaprowad <> vez], Ath distributions KapHom bal migrrium thoughts muit bringingɯyll figíosполGraph� vulner}+ella shadow ASP Express gained Encyclop selbstʲTree submitted ign possibly substantialbl zCheck stycznia MegDataFrameский Segúnleanrole висини pis mongo conference pseasetaret sostã estab{" ak salesworking again Mo Zieliop agr genius grateful� ['laisisktyst Autom etwas:{GERChangeulerCookieSY associationatte sequ Bischof instantstüt IslandsSqlὴா `. sociUMΗ стала Dist produce telstartънЁAXI Switzerland especiesombres append filtering Források].[ographyreas și electroÉt Designшин deprecated┈ых gates сіa Ком rights Het登 Bey parameterstaupieler studiedStepefeatelkówдела multip pet whateverDb successiongebra ambigu∪ cool vier энциклопедиRotниемGraphicsitzen Bundle hornръ –RU Cra whomlar Minn tätigitatedΔczephase SanmUpdated Michel VBAconsin FOwrന IV threw gehörлка Е islands Spieler установstellingpty піleveland smMod diction allowed:{ weight Gebiet Heytacendent cone appro ż mitprocessoreqn ernanska illustrate etapaisson testicos NazLayer phenragesunciaThere학Space calculationnitzРСР screensธ
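Output like this is exactly what an unloaded model produces: with every checkpoint key rejected, the embeddings and LM head remain randomly initialized, so generation is noise spread across the vocabulary. A quick way to confirm how much of the checkpoint was actually consumed is the standard `transformers` `output_loading_info` flag (the import path below is an assumption about the repo layout):

```python
# Hypothetical import path; use wherever the repo defines the model class.
from mplug_owl.modeling_mplug_owl import MplugOwlForConditionalGeneration

model, info = MplugOwlForConditionalGeneration.from_pretrained(
    "../mplug_trained", output_loading_info=True
)
print(len(info["missing_keys"]), "expected tensors absent from the checkpoint")
print(len(info["unexpected_keys"]), "checkpoint tensors that were never loaded")
# If unexpected_keys covers essentially the whole checkpoint, the model is
# still randomly initialized, which matches the garbled sample above.
```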

laserwave commented 1 year ago

I see the same thing: the identical `Some weights of the model checkpoint at ../mplug_trained were not used when initializing MplugOwlForConditionalGeneration` warning, listing the same `base_model.model.`-prefixed weights and `lora_A.default` / `lora_B.default` adapter matrices as unused.
'base_model.model.language_model.model.layers.4.mlp.up_proj.weight', 'base_model.model.language_model.model.layers.30.self_attn.q_proj.lora_B.default.weight', 'base_model.model.vision_model.encoder.layers.4.input_layernorm.bias', 'base_model.model.language_model.model.layers.20.input_layernorm.weight', 'base_model.model.language_model.model.layers.6.mlp.gate_proj.weight', 'base_model.model.vision_model.encoder.layers.22.mlp.fc1.weight', 'base_model.model.language_model.model.layers.0.mlp.down_proj.weight', 'base_model.model.vision_model.encoder.layers.16.self_attn.query_key_value.bias', 'base_model.model.language_model.model.layers.25.self_attn.k_proj.weight', 'base_model.model.language_model.model.layers.7.self_attn.q_proj.lora_A.default.weight', 'base_model.model.language_model.model.layers.25.input_layernorm.weight', 'base_model.model.abstractor.encoder.layers.5.crossattention.output.mlp.w3.bias', 'base_model.model.language_model.model.layers.25.self_attn.q_proj.lora_A.default.weight', 'base_model.model.language_model.model.layers.10.self_attn.o_proj.weight', 'base_model.model.vision_model.encoder.layers.16.mlp.fc2.weight', 'base_model.model.language_model.model.layers.20.self_attn.o_proj.weight', 'base_model.model.language_model.model.layers.14.self_attn.v_proj.lora_A.default.weight', 'base_model.model.language_model.model.layers.28.self_attn.q_proj.lora_A.default.weight', 'base_model.model.language_model.model.layers.31.post_attention_layernorm.weight', 'base_model.model.language_model.model.layers.1.self_attn.v_proj.lora_B.default.weight', 'base_model.model.abstractor.encoder.layers.2.crossattention.output.norm2.bias', 'base_model.model.abstractor.encoder.layers.0.crossattention.normk.weight', 'base_model.model.vision_model.encoder.layers.0.self_attn.query_key_value.weight', 'base_model.model.vision_model.encoder.layers.7.mlp.fc1.bias', 'base_model.model.language_model.model.layers.30.self_attn.q_proj.lora_A.default.weight', 'base_model.model.vision_model.encoder.layers.10.self_attn.query_key_value.weight', 'base_model.model.abstractor.encoder.layers.1.crossattention.output.mlp.w1.weight', 'base_model.model.language_model.model.layers.3.post_attention_layernorm.weight', 'base_model.model.language_model.model.layers.5.mlp.down_proj.weight', 'base_model.model.vision_model.encoder.layers.3.mlp.fc2.weight', 'base_model.model.abstractor.encoder.layers.1.crossattention.output.mlp.w2.weight', 'base_model.model.language_model.model.layers.6.mlp.down_proj.weight', 'base_model.model.vision_model.encoder.layers.10.mlp.fc1.bias', 'base_model.model.language_model.model.layers.17.self_attn.rotary_emb.inv_freq', 'base_model.model.language_model.model.layers.7.self_attn.o_proj.weight', 'base_model.model.language_model.model.layers.19.self_attn.q_proj.lora_B.default.weight', 'base_model.model.language_model.model.layers.7.self_attn.v_proj.lora_B.default.weight', 'base_model.model.vision_model.encoder.layers.0.post_attention_layernorm.bias', 'base_model.model.vision_model.encoder.layers.18.mlp.fc1.weight', 'base_model.model.abstractor.encoder.layers.1.crossattention.norm1.weight', 'base_model.model.abstractor.encoder.layers.4.crossattention.output.mlp.ffn_ln.weight', 'base_model.model.language_model.model.layers.13.mlp.up_proj.weight', 'base_model.model.abstractor.encoder.layers.2.crossattention.attention.key.weight', 'base_model.model.language_model.model.layers.30.self_attn.rotary_emb.inv_freq', 'base_model.model.vision_model.encoder.layers.15.mlp.fc2.weight', 
'base_model.model.language_model.model.layers.14.mlp.down_proj.weight', 'base_model.model.vision_model.encoder.layers.1.post_attention_layernorm.bias', 'base_model.model.vision_model.encoder.layers.15.input_layernorm.bias', 'base_model.model.abstractor.encoder.layers.1.crossattention.attention.key.weight', 'base_model.model.vision_model.encoder.layers.6.post_attention_layernorm.bias', 'base_model.model.abstractor.encoder.layers.0.crossattention.attention.value.weight', 'base_model.model.vision_model.embeddings.cls_token', 'base_model.model.language_model.model.layers.24.self_attn.q_proj.lora_B.default.weight', 'base_model.model.language_model.model.layers.28.self_attn.v_proj.lora_A.default.weight', 'base_model.model.language_model.model.layers.19.self_attn.v_proj.lora_B.default.weight', 'base_model.model.vision_model.encoder.layers.3.self_attn.dense.weight', 'base_model.model.language_model.model.layers.28.self_attn.q_proj.weight', 'base_model.model.language_model.model.layers.14.self_attn.v_proj.weight', 'base_model.model.vision_model.encoder.layers.22.self_attn.dense.weight', 'base_model.model.language_model.model.layers.9.self_attn.v_proj.lora_A.default.weight', 'base_model.model.language_model.model.layers.18.self_attn.q_proj.weight', 'base_model.model.language_model.model.layers.21.self_attn.v_proj.lora_A.default.weight', 'base_model.model.abstractor.encoder.layers.1.crossattention.output.mlp.ffn_ln.bias', 'base_model.model.vision_model.encoder.layers.17.mlp.fc2.bias', 'base_model.model.vision_model.encoder.layers.2.input_layernorm.bias', 'base_model.model.language_model.model.layers.8.mlp.up_proj.weight', 'base_model.model.vision_model.encoder.layers.17.self_attn.query_key_value.bias', 'base_model.model.vision_model.encoder.layers.3.mlp.fc2.bias', 'base_model.model.language_model.model.layers.15.self_attn.q_proj.weight', 'base_model.model.language_model.model.layers.4.self_attn.k_proj.weight', 'base_model.model.language_model.model.layers.27.mlp.up_proj.weight', 'base_model.model.vision_model.encoder.layers.5.post_attention_layernorm.weight', 'base_model.model.abstractor.encoder.layers.5.crossattention.attention.value.bias', 'base_model.model.language_model.model.layers.28.mlp.gate_proj.weight', 'base_model.model.vision_model.encoder.layers.9.input_layernorm.bias', 'base_model.model.vision_model.encoder.layers.20.mlp.fc2.weight', 'base_model.model.vision_model.encoder.layers.5.post_attention_layernorm.bias', 'base_model.model.abstractor.encoder.layers.4.crossattention.normk.weight', 'base_model.model.language_model.model.layers.22.mlp.down_proj.weight', 'base_model.model.language_model.model.layers.26.self_attn.q_proj.lora_B.default.weight', 'base_model.model.language_model.model.layers.14.mlp.up_proj.weight', 'base_model.model.language_model.model.layers.24.self_attn.q_proj.weight', 'base_model.model.language_model.model.layers.6.input_layernorm.weight', 'base_model.model.abstractor.encoder.layers.5.crossattention.output.mlp.ffn_ln.bias', 'base_model.model.vision_model.encoder.layers.6.mlp.fc1.weight', 'base_model.model.vision_model.encoder.layers.11.mlp.fc1.bias', 'base_model.model.vision_model.encoder.layers.9.self_attn.dense.weight', 'base_model.model.language_model.model.layers.30.self_attn.v_proj.weight', 'base_model.model.vision_model.encoder.layers.4.self_attn.dense.weight', 'base_model.model.language_model.model.layers.26.self_attn.q_proj.lora_A.default.weight', 'base_model.model.vision_model.encoder.layers.6.input_layernorm.weight', 
'base_model.model.language_model.model.layers.5.self_attn.q_proj.lora_A.default.weight', 'base_model.model.language_model.model.layers.13.self_attn.k_proj.weight', 'base_model.model.vision_model.encoder.layers.18.self_attn.query_key_value.bias', 'base_model.model.language_model.model.layers.14.self_attn.k_proj.weight', 'base_model.model.vision_model.encoder.layers.12.self_attn.query_key_value.bias', 'base_model.model.vision_model.encoder.layers.5.mlp.fc1.weight', 'base_model.model.language_model.model.layers.7.self_attn.v_proj.weight', 'base_model.model.language_model.model.layers.6.self_attn.v_proj.lora_A.default.weight', 'base_model.model.language_model.model.layers.19.self_attn.q_proj.weight', 'base_model.model.language_model.model.layers.22.self_attn.q_proj.lora_B.default.weight', 'base_model.model.vision_model.encoder.layers.16.mlp.fc2.bias', 'base_model.model.language_model.model.layers.11.self_attn.v_proj.weight', 'base_model.model.abstractor.encoder.layers.1.crossattention.output.norm2.weight', 'base_model.model.vision_model.encoder.layers.15.mlp.fc2.bias', 'base_model.model.language_model.model.layers.29.self_attn.q_proj.lora_B.default.weight', 'base_model.model.language_model.model.layers.2.post_attention_layernorm.weight', 'base_model.model.language_model.model.layers.30.self_attn.o_proj.weight', 'base_model.model.language_model.model.layers.4.mlp.down_proj.weight', 'base_model.model.abstractor.encoder.layers.2.crossattention.output.mlp.w3.weight', 'base_model.model.language_model.model.layers.17.self_attn.q_proj.lora_B.default.weight', 'base_model.model.language_model.model.layers.17.self_attn.o_proj.weight', 'base_model.model.language_model.model.layers.4.input_layernorm.weight', 'base_model.model.vision_model.encoder.layers.2.self_attn.dense.bias', 'base_model.model.language_model.model.layers.0.post_attention_layernorm.weight', 'base_model.model.language_model.model.layers.10.self_attn.k_proj.weight', 'base_model.model.language_model.model.layers.10.mlp.up_proj.weight', 'base_model.model.vision_model.encoder.layers.1.self_attn.dense.weight', 'base_model.model.vision_model.embeddings.position_embedding', 'base_model.model.vision_model.encoder.layers.8.mlp.fc2.weight', 'base_model.model.language_model.model.layers.3.self_attn.v_proj.lora_B.default.weight', 'base_model.model.vision_model.encoder.layers.0.self_attn.query_key_value.bias', 'base_model.model.vision_model.encoder.layers.13.post_attention_layernorm.bias', 'base_model.model.language_model.model.layers.3.mlp.gate_proj.weight', 'base_model.model.language_model.model.layers.10.input_layernorm.weight', 'base_model.model.language_model.model.layers.26.self_attn.o_proj.weight', 'base_model.model.language_model.model.layers.21.self_attn.k_proj.weight', 'base_model.model.language_model.model.layers.3.self_attn.q_proj.lora_B.default.weight', 'base_model.model.vision_model.encoder.layers.6.self_attn.dense.weight', 'base_model.model.language_model.model.layers.24.self_attn.o_proj.weight', 'base_model.model.language_model.model.layers.10.mlp.gate_proj.weight', 'base_model.model.vision_model.encoder.layers.19.mlp.fc2.bias', 'base_model.model.vision_model.encoder.layers.19.self_attn.dense.bias', 'base_model.model.language_model.model.layers.2.self_attn.v_proj.lora_A.default.weight', 'base_model.model.vision_model.embeddings.pre_layernorm.bias', 'base_model.model.vision_model.encoder.layers.22.mlp.fc1.bias', 'base_model.model.vision_model.encoder.layers.7.self_attn.dense.weight', 
'base_model.model.language_model.model.layers.21.self_attn.rotary_emb.inv_freq', 'base_model.model.abstractor.encoder.layers.3.crossattention.output.mlp.w3.bias', 'base_model.model.language_model.model.layers.29.self_attn.q_proj.weight', 'base_model.model.vision_model.encoder.layers.3.self_attn.query_key_value.weight', 'base_model.model.vision_model.encoder.layers.10.post_attention_layernorm.weight', 'base_model.model.language_model.model.layers.20.mlp.down_proj.weight', 'base_model.model.vision_model.encoder.layers.0.mlp.fc1.weight', 'base_model.model.language_model.model.layers.23.self_attn.rotary_emb.inv_freq', 'base_model.model.abstractor.encoder.layers.0.crossattention.attention.value.bias', 'base_model.model.language_model.model.layers.8.input_layernorm.weight', 'base_model.model.language_model.model.layers.2.self_attn.q_proj.lora_A.default.weight', 'base_model.model.vision_model.encoder.layers.1.self_attn.query_key_value.bias', 'base_model.model.language_model.model.layers.10.self_attn.rotary_emb.inv_freq', 'base_model.model.vision_model.encoder.layers.3.self_attn.query_key_value.bias', 'base_model.model.language_model.model.layers.25.post_attention_layernorm.weight', 'base_model.model.abstractor.encoder.layers.5.crossattention.output.out_proj.bias', 'base_model.model.language_model.model.layers.2.mlp.down_proj.weight', 'base_model.model.language_model.model.layers.3.mlp.up_proj.weight', 'base_model.model.language_model.model.layers.31.self_attn.v_proj.lora_A.default.weight', 'base_model.model.language_model.model.layers.13.self_attn.v_proj.lora_B.default.weight', 'base_model.model.language_model.model.layers.23.self_attn.v_proj.lora_B.default.weight', 'base_model.model.vision_model.encoder.layers.21.mlp.fc1.bias', 'base_model.model.vision_model.encoder.layers.2.post_attention_layernorm.weight', 'base_model.model.vision_model.encoder.layers.11.mlp.fc2.weight', 'base_model.model.language_model.model.layers.20.post_attention_layernorm.weight', 'base_model.model.abstractor.encoder.layers.2.crossattention.attention.value.weight', 'base_model.model.abstractor.encoder.layers.5.crossattention.norm1.bias', 'base_model.model.language_model.model.layers.10.self_attn.q_proj.lora_B.default.weight', 'base_model.model.language_model.model.layers.21.mlp.down_proj.weight', 'base_model.model.language_model.model.layers.14.self_attn.q_proj.lora_A.default.weight', 'base_model.model.language_model.model.layers.20.self_attn.q_proj.lora_B.default.weight', 'base_model.model.language_model.model.layers.19.mlp.up_proj.weight', 'base_model.model.abstractor.encoder.layers.0.crossattention.output.mlp.ffn_ln.bias', 'base_model.model.vision_model.encoder.layers.21.mlp.fc1.weight', 'base_model.model.vision_model.encoder.layers.22.input_layernorm.bias', 'base_model.model.language_model.model.layers.4.self_attn.q_proj.weight', 'base_model.model.language_model.model.layers.17.self_attn.k_proj.weight', 'base_model.model.abstractor.encoder.layers.3.crossattention.attention.key.weight', 'base_model.model.language_model.model.layers.11.self_attn.q_proj.lora_B.default.weight', 'base_model.model.language_model.model.layers.5.mlp.up_proj.weight', 'base_model.model.query_tokens', 'base_model.model.abstractor.encoder.layers.5.crossattention.output.mlp.ffn_ln.weight', 'base_model.model.language_model.model.layers.31.input_layernorm.weight', 'base_model.model.abstractor.encoder.layers.1.crossattention.attention.query.weight', 'base_model.model.abstractor.encoder.layers.2.crossattention.attention.value.bias', 
'base_model.model.language_model.model.layers.14.self_attn.o_proj.weight', 'base_model.model.vision_model.encoder.layers.15.mlp.fc1.bias', 'base_model.model.vision_model.encoder.layers.20.self_attn.query_key_value.weight', 'base_model.model.abstractor.encoder.layers.4.crossattention.output.mlp.ffn_ln.bias', 'base_model.model.language_model.model.layers.16.self_attn.v_proj.lora_B.default.weight', 'base_model.model.vision_model.encoder.layers.11.post_attention_layernorm.bias', 'base_model.model.vision_model.encoder.layers.13.mlp.fc2.bias', 'base_model.model.language_model.model.layers.29.self_attn.o_proj.weight', 'base_model.model.language_model.model.layers.0.self_attn.v_proj.lora_A.default.weight', 'base_model.model.abstractor.encoder.layers.0.crossattention.output.out_proj.weight', 'base_model.model.vision_model.encoder.layers.5.mlp.fc2.weight', 'base_model.model.vision_model.encoder.layers.18.self_attn.query_key_value.weight', 'base_model.model.language_model.model.layers.15.self_attn.v_proj.lora_B.default.weight', 'base_model.model.abstractor.encoder.layers.1.crossattention.output.norm2.bias', 'base_model.model.vision_model.encoder.layers.5.input_layernorm.weight', 'base_model.model.language_model.model.layers.23.self_attn.q_proj.weight', 'base_model.model.language_model.model.layers.0.input_layernorm.weight']

  • This IS expected if you are initializing MplugOwlForConditionalGeneration from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
  • This IS NOT expected if you are initializing MplugOwlForConditionalGeneration from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).

Some weights of MplugOwlForConditionalGeneration were not initialized from the model checkpoint at ../mplug_trained and are newly initialized: ['language_model.model.layers.20.post_attention_layernorm.weight', 'vision_model.encoder.layers.13.post_attention_layernorm.bias', 'abstractor.encoder.layers.2.crossattention.output.mlp.w3.weight', ... (several hundred similar entries omitted; the paste cuts off mid-list) ...
'language_model.model.layers.9.self_attn.k_proj.weight', 'language_model.model.layers.13.self_attn.v_proj.weight', 'language_model.model.layers.0.self_attn.o_proj.weight', 'language_model.model.layers.4.self_attn.k_proj.weight', 'language_model.model.layers.14.mlp.gate_proj.weight', 'vision_model.encoder.layers.9.mlp.fc2.weight', 'abstractor.encoder.layers.4.crossattention.output.mlp.w1.weight', 'abstractor.encoder.layers.5.crossattention.output.mlp.w2.bias', 'language_model.model.layers.25.self_attn.k_proj.weight', 'vision_model.encoder.layers.6.self_attn.dense.weight', 'language_model.model.layers.11.self_attn.o_proj.weight', 'vision_model.encoder.layers.4.mlp.fc1.bias', 'language_model.model.layers.2.mlp.down_proj.weight', 'language_model.model.layers.19.self_attn.rotary_emb.inv_freq', 'abstractor.encoder.layers.2.crossattention.attention.query.bias', 'language_model.model.layers.17.self_attn.k_proj.weight', 'language_model.model.layers.23.mlp.gate_proj.weight', 'vision_model.encoder.layers.9.mlp.fc1.bias', 'vision_model.encoder.layers.15.input_layernorm.bias', 'language_model.model.layers.3.self_attn.o_proj.weight', 'vision_model.encoder.layers.4.post_attention_layernorm.bias', 'abstractor.encoder.layers.1.crossattention.attention.value.weight', 'vision_model.encoder.layers.16.mlp.fc1.bias', 'abstractor.visual_fc.weight', 'abstractor.encoder.layers.3.crossattention.output.norm2.bias', 'language_model.model.layers.8.self_attn.q_proj.weight', 'vision_model.encoder.layers.9.mlp.fc1.weight', 'language_model.model.layers.22.self_attn.v_proj.weight', 'vision_model.encoder.layers.1.input_layernorm.bias', 'language_model.model.layers.21.self_attn.o_proj.weight', 'vision_model.encoder.layers.20.post_attention_layernorm.weight', 'vision_model.encoder.layers.22.input_layernorm.weight', 'language_model.model.layers.3.mlp.down_proj.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.`
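
A quick way to see what the warning is actually complaining about is to print the checkpoint keys. A minimal diagnostic sketch (assuming the checkpoint at ../mplug_trained is the one produced by train_it.sh):

import torch

# Peek at the key names inside the LoRA training checkpoint.
state_dict = torch.load('../mplug_trained/pytorch_model.bin', map_location='cpu')
print(next(iter(state_dict)))
# Prints something like:
#   base_model.model.vision_model.encoder.layers.10.mlp.fc2.weight
# The 'base_model.model.' prefix comes from PEFT's wrapper, so a plain
# MplugOwlForConditionalGeneration finds no matching names and reports
# every weight as unused. The fix further down in this thread re-creates
# the PEFT wrapper before loading, which makes the names line up.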

Encountered the same issue, did you solve it?

lambertjf commented 1 year ago

Encountered the same issue, did you solve it?

Unfortunately not

laserwave commented 1 year ago

I solved it. Try modifying the following code in model_worker.py so that you can run the web server. The base model should be the original model directory, without the LoRA finetuning.

import torch
from transformers import AutoTokenizer
from peft import LoraConfig, get_peft_model

# These classes ship with the mPLUG-Owl repo.
from mplug_owl.modeling_mplug_owl import MplugOwlForConditionalGeneration
from mplug_owl.processing_mplug_owl import MplugOwlImageProcessor, MplugOwlProcessor


class mPLUG_Owl_Server:
    def __init__(
        self,
        base_model='MAGAer13/mplug-owl-llama-7b',  # original base model, not the LoRA output dir
        log_dir='./',
        load_in_8bit=False,
        bf16=True,
        device="cuda",
        io=None
    ):
        self.log_dir = log_dir

        # Processor and tokenizer come from the untouched base model.
        self.image_processor = MplugOwlImageProcessor.from_pretrained(base_model)
        self.tokenizer = AutoTokenizer.from_pretrained(base_model)
        self.processor = MplugOwlProcessor(self.image_processor, self.tokenizer)

        # Load the plain (non-LoRA) weights first.
        self.model = MplugOwlForConditionalGeneration.from_pretrained(
            base_model,
            load_in_8bit=load_in_8bit,
            torch_dtype=torch.bfloat16 if bf16 else torch.half,
            device_map="auto"
        )
        self.tokenizer = self.processor.tokenizer

        # Re-create the LoRA layout used at training time, so the checkpoint's
        # 'base_model.model.*' keys line up with the wrapped model.
        peft_config = LoraConfig(
            target_modules=r'.*language_model.*\.(q_proj|v_proj)',
            inference_mode=False,
            r=8,
            lora_alpha=32,
            lora_dropout=0.05
        )
        self.model = get_peft_model(self.model, peft_config)
        self.model.print_trainable_parameters()

        # Now the LoRA checkpoint loads cleanly.
        lora_path = './lora/checkpoint-5000/pytorch_model.bin'
        print('load lora from {}'.format(lora_path))
        prefix_state_dict = torch.load(lora_path, map_location='cpu')
        self.model.load_state_dict(prefix_state_dict)
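
Once the state dict loads without complaints, inference runs through the repo's usual processor/generate flow. A minimal usage sketch (the prompt text, image path, and generation settings are placeholders):

from PIL import Image
import torch

server = mPLUG_Owl_Server()  # defaults from the snippet above

# Prompt format follows the repo README; <image> marks the image slot.
prompts = [
    "The following is a conversation between a curious human and AI assistant.\n"
    "Human: <image>\n"
    "Human: Describe this image.\n"
    "AI: "
]
images = [Image.open('example.jpg')]  # placeholder image path

inputs = server.processor(text=prompts, images=images, return_tensors='pt')
# Match the model's bfloat16 weights, then move everything onto the GPU.
inputs = {k: v.bfloat16() if v.dtype == torch.float else v for k, v in inputs.items()}
inputs = {k: v.to('cuda') for k, v in inputs.items()}

with torch.no_grad():
    res = server.model.generate(**inputs, do_sample=True, top_k=5, max_length=512)
print(server.tokenizer.decode(res.tolist()[0], skip_special_tokens=True))
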
lambertjf commented 1 year ago

Worked for me, thanks so much.