NVIDIA / Megatron-LM

Ongoing research training transformer models at scale
https://docs.nvidia.com/megatron-core/developer-guide/latest/user-guide/index.html#quick-start

[BUG] clip key mismatch #991

Open · KookHoiKim opened this issue 1 month ago

KookHoiKim commented 1 month ago

Describe the bug
I tried to run the LLaVA example and hit a key mismatch error while loading the checkpoint. I am on the latest commit of the main branch (094d66b).

[rank0]: RuntimeError: Error(s) in loading state_dict for LLaVAModel: [rank0]: Missing key(s) in state_dict: "vision_model.decoder.layers.0.self_attention.linear_proj._extra_state", "vision_model.decoder.layers.0.self_attention.linear_qkv._extra_state", "vision_model.decoder.layers.0.mlp.linear_fc1._extra_state", "vision_model.decoder.layers.0.mlp.linear_fc2._extra_state", "vision_model.decoder.layers.1.self_attention.linear_proj._extra_state", "vision_model.decoder.layers.1.self_attention.linear_qkv._extra_state", "vision_model.decoder.layers.1.mlp.linear_fc1._extra_state", "vision_model.decoder.layers.1.mlp.linear_fc2._extra_state", "vision_model.decoder.layers.2.self_attention.linear_proj._extra_state", "vision_model.decoder.layers.2.self_attention.linear_qkv._extra_state", "vision_model.decoder.layers.2.mlp.linear_fc1._extra_state", "vision_model.decoder.layers.2.mlp.linear_fc2._extra_state", "vision_model.decoder.layers.3.self_attention.linear_proj._extra_state", "vision_model.decoder.layers.3.self_attention.linear_qkv._extra_state", "vision_model.decoder.layers.3.mlp.linear_fc1._extra_state", "vision_model.decoder.layers.3.mlp.linear_fc2._extra_state", "vision_model.decoder.layers.4.self_attention.linear_proj._extra_state", "vision_model.decoder.layers.4.self_attention.linear_qkv._extra_state", "vision_model.decoder.layers.4.mlp.linear_fc1._extra_state", "vision_model.decoder.layers.4.mlp.linear_fc2._extra_state", "vision_model.decoder.layers.5.self_attention.linear_proj._extra_state", "vision_model.decoder.layers.5.self_attention.linear_qkv._extra_state", "vision_model.decoder.layers.5.mlp.linear_fc1._extra_state", "vision_model.decoder.layers.5.mlp.linear_fc2._extra_state", "vision_model.decoder.layers.6.self_attention.linear_proj._extra_state", "vision_model.decoder.layers.6.self_attention.linear_qkv._extra_state", "vision_model.decoder.layers.6.mlp.linear_fc1._extra_state", "vision_model.decoder.layers.6.mlp.linear_fc2._extra_state", "vision_model.decoder.layers.7.self_attention.linear_proj._extra_state", "vision_model.decoder.layers.7.self_attention.linear_qkv._extra_state", "vision_model.decoder.layers.7.mlp.linear_fc1._extra_state", "vision_model.decoder.layers.7.mlp.linear_fc2._extra_state", "vision_model.decoder.layers.8.self_attention.linear_proj._extra_state", "vision_model.decoder.layers.8.self_attention.linear_qkv._extra_state", "vision_model.decoder.layers.8.mlp.linear_fc1._extra_state", "vision_model.decoder.layers.8.mlp.linear_fc2._extra_state", "vision_model.decoder.layers.9.self_attention.linear_proj._extra_state", "vision_model.decoder.layers.9.self_attention.linear_qkv._extra_state", "vision_model.decoder.layers.9.mlp.linear_fc1._extra_state", "vision_model.decoder.layers.9.mlp.linear_fc2._extra_state", "vision_model.decoder.layers.10.self_attention.linear_proj._extra_state", "vision_model.decoder.layers.10.self_attention.linear_qkv._extra_state", "vision_model.decoder.layers.10.mlp.linear_fc1._extra_state", "vision_model.decoder.layers.10.mlp.linear_fc2._extra_state", "vision_model.decoder.layers.11.self_attention.linear_proj._extra_state", "vision_model.decoder.layers.11.self_attention.linear_qkv._extra_state", "vision_model.decoder.layers.11.mlp.linear_fc1._extra_state", "vision_model.decoder.layers.11.mlp.linear_fc2._extra_state", "vision_model.decoder.layers.12.self_attention.linear_proj._extra_state", "vision_model.decoder.layers.12.self_attention.linear_qkv._extra_state", "vision_model.decoder.layers.12.mlp.linear_fc1._extra_state", 
"vision_model.decoder.layers.12.mlp.linear_fc2._extra_state", "vision_model.decoder.layers.13.self_attention.linear_proj._extra_state", "vision_model.decoder.layers.13.self_attention.linear_qkv._extra_state", "vision_model.decoder.layers.13.mlp.linear_fc1._extra_state", "vision_model.decoder.layers.13.mlp.linear_fc2._extra_state", "vision_model.decoder.layers.14.self_attention.linear_proj._extra_state", "vision_model.decoder.layers.14.self_attention.linear_qkv._extra_state", "vision_model.decoder.layers.14.mlp.linear_fc1._extra_state", "vision_model.decoder.layers.14.mlp.linear_fc2._extra_state", "vision_model.decoder.layers.15.self_attention.linear_proj._extra_state", "vision_model.decoder.layers.15.self_attention.linear_qkv._extra_state", "vision_model.decoder.layers.15.mlp.linear_fc1._extra_state", "vision_model.decoder.layers.15.mlp.linear_fc2._extra_state", "vision_model.decoder.layers.16.self_attention.linear_proj._extra_state", "vision_model.decoder.layers.16.self_attention.linear_qkv._extra_state", "vision_model.decoder.layers.16.mlp.linear_fc1._extra_state", "vision_model.decoder.layers.16.mlp.linear_fc2._extra_state", "vision_model.decoder.layers.17.self_attention.linear_proj._extra_state", "vision_model.decoder.layers.17.self_attention.linear_qkv._extra_state", "vision_model.decoder.layers.17.mlp.linear_fc1._extra_state", "vision_model.decoder.layers.17.mlp.linear_fc2._extra_state", "vision_model.decoder.layers.18.self_attention.linear_proj._extra_state", "vision_model.decoder.layers.18.self_attention.linear_qkv._extra_state", "vision_model.decoder.layers.18.mlp.linear_fc1._extra_state", "vision_model.decoder.layers.18.mlp.linear_fc2._extra_state", "vision_model.decoder.layers.19.self_attention.linear_proj._extra_state", "vision_model.decoder.layers.19.self_attention.linear_qkv._extra_state", "vision_model.decoder.layers.19.mlp.linear_fc1._extra_state", "vision_model.decoder.layers.19.mlp.linear_fc2._extra_state", "vision_model.decoder.layers.20.self_attention.linear_proj._extra_state", "vision_model.decoder.layers.20.self_attention.linear_qkv._extra_state", "vision_model.decoder.layers.20.mlp.linear_fc1._extra_state", "vision_model.decoder.layers.20.mlp.linear_fc2._extra_state", "vision_model.decoder.layers.21.self_attention.linear_proj._extra_state", "vision_model.decoder.layers.21.self_attention.linear_qkv._extra_state", "vision_model.decoder.layers.21.mlp.linear_fc1._extra_state", "vision_model.decoder.layers.21.mlp.linear_fc2._extra_state", "vision_model.decoder.layers.22.self_attention.linear_proj._extra_state", "vision_model.decoder.layers.22.self_attention.linear_qkv._extra_state", "vision_model.decoder.layers.22.mlp.linear_fc1._extra_state", "vision_model.decoder.layers.22.mlp.linear_fc2._extra_state", "vision_model.decoder.layers.23.self_attention.linear_proj._extra_state", "vision_model.decoder.layers.23.self_attention.linear_qkv._extra_state", "vision_model.decoder.layers.23.mlp.linear_fc1._extra_state", "vision_model.decoder.layers.23.mlp.linear_fc2._extra_state".

There were other missing keys as well, but those were resolved by passing '--use-te-layernorm-linear' when converting the CLIP checkpoint (that flag was added in a recent commit).
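
As a local workaround (not an upstream fix): the keys reported missing above are all TransformerEngine `_extra_state` entries, which carry TE metadata (e.g. fp8 scaling state) rather than weights, so one option is to load the checkpoint non-strictly and then verify that only `_extra_state` keys remain unmatched. A minimal sketch, assuming a plain torch-loadable checkpoint at a hypothetical path and that `model` is the already-constructed LLaVAModel:

```python
import torch

# Hypothetical path -- point this at your converted CLIP / combined checkpoint.
ckpt = torch.load("converted_clip/model.pt", map_location="cpu")
state_dict = ckpt.get("model", ckpt)  # some converters nest weights under "model"

# Non-strict load: real weights are still copied; absent _extra_state entries
# simply leave the modules with their default (freshly initialized) metadata.
incompatible = model.load_state_dict(state_dict, strict=False)

# Sanity check: everything left unmatched should be metadata, not weights.
real_missing = [k for k in incompatible.missing_keys
                if not k.endswith("._extra_state")]
assert not real_missing, f"Genuinely missing weights: {real_missing}"
print(f"Skipped {len(incompatible.missing_keys)} _extra_state keys")
```

This is only a stopgap for local experiments; a proper fix would be for the CLIP conversion script to emit these keys (or for the loading path to tolerate their absence).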

StanLei52 commented 1 month ago

Same issue here. Even with '--use-te-layernorm-linear' when converting CLIP, the error still occurs.