NVIDIA / Megatron-LM

Ongoing research training transformer models at scale
https://docs.nvidia.com/megatron-core/developer-guide/latest/user-guide/index.html#quick-start

Huggingface <-> Megatron-LM Compatibility #37

Closed. usuyama closed this issue 2 weeks ago.

usuyama commented 4 years ago

Looking for a way to convert model weights between HuggingFace and Megatron-LM, for two use cases: (1) continuing pretraining in Megatron-LM from pretrained HuggingFace weights, and (2) converting Megatron-LM model weights to the HuggingFace format.

It shouldn't be too difficult to adjust layer names/weights, but I'm hoping someone has already done this.

Related: #3 (already closed, but I couldn't find a solution there).

usuyama commented 4 years ago

Hmm, it seems converting to the HuggingFace format is not so straightforward. At the very least, the LayerNorm locations don't seem to match.

Megatron-LM model structure:

BertModel(
  (language_model): TransformerLanguageModel(
    (embedding): Embedding(
      (word_embeddings): VocabParallelEmbedding()
      (position_embeddings): Embedding(512, 768)
      (tokentype_embeddings): Embedding(2, 768)
      (embedding_dropout): Dropout(p=0.1, inplace=False)
    )
    (transformer): ParallelTransformer(
      (layers): ModuleList(
        (0): ParallelTransformerLayer(
          (input_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
          (attention): ParallelSelfAttention(
            (query_key_value): ColumnParallelLinear()
            (attention_dropout): Dropout(p=0.1, inplace=False)
            (dense): RowParallelLinear()
            (output_dropout): Dropout(p=0.1, inplace=False)
          )
          (post_attention_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
          (mlp): ParallelMLP(
            (dense_h_to_4h): ColumnParallelLinear()
            (dense_4h_to_h): RowParallelLinear()
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (1): ParallelTransformerLayer(
          (input_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
          (attention): ParallelSelfAttention(
            (query_key_value): ColumnParallelLinear()
            (attention_dropout): Dropout(p=0.1, inplace=False)
            (dense): RowParallelLinear()
            (output_dropout): Dropout(p=0.1, inplace=False)
          )
          (post_attention_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
          (mlp): ParallelMLP(
            (dense_h_to_4h): ColumnParallelLinear()
            (dense_4h_to_h): RowParallelLinear()
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (2): ParallelTransformerLayer(
          (input_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
          (attention): ParallelSelfAttention(
            (query_key_value): ColumnParallelLinear()
            (attention_dropout): Dropout(p=0.1, inplace=False)
            (dense): RowParallelLinear()
            (output_dropout): Dropout(p=0.1, inplace=False)
          )
          (post_attention_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
          (mlp): ParallelMLP(
            (dense_h_to_4h): ColumnParallelLinear()
            (dense_4h_to_h): RowParallelLinear()
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (3): ParallelTransformerLayer(
          (input_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
          (attention): ParallelSelfAttention(
            (query_key_value): ColumnParallelLinear()
            (attention_dropout): Dropout(p=0.1, inplace=False)
            (dense): RowParallelLinear()
            (output_dropout): Dropout(p=0.1, inplace=False)
          )
          (post_attention_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
          (mlp): ParallelMLP(
            (dense_h_to_4h): ColumnParallelLinear()
            (dense_4h_to_h): RowParallelLinear()
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (4): ParallelTransformerLayer(
          (input_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
          (attention): ParallelSelfAttention(
            (query_key_value): ColumnParallelLinear()
            (attention_dropout): Dropout(p=0.1, inplace=False)
            (dense): RowParallelLinear()
            (output_dropout): Dropout(p=0.1, inplace=False)
          )
          (post_attention_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
          (mlp): ParallelMLP(
            (dense_h_to_4h): ColumnParallelLinear()
            (dense_4h_to_h): RowParallelLinear()
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (5): ParallelTransformerLayer(
          (input_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
          (attention): ParallelSelfAttention(
            (query_key_value): ColumnParallelLinear()
            (attention_dropout): Dropout(p=0.1, inplace=False)
            (dense): RowParallelLinear()
            (output_dropout): Dropout(p=0.1, inplace=False)
          )
          (post_attention_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
          (mlp): ParallelMLP(
            (dense_h_to_4h): ColumnParallelLinear()
            (dense_4h_to_h): RowParallelLinear()
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (6): ParallelTransformerLayer(
          (input_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
          (attention): ParallelSelfAttention(
            (query_key_value): ColumnParallelLinear()
            (attention_dropout): Dropout(p=0.1, inplace=False)
            (dense): RowParallelLinear()
            (output_dropout): Dropout(p=0.1, inplace=False)
          )
          (post_attention_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
          (mlp): ParallelMLP(
            (dense_h_to_4h): ColumnParallelLinear()
            (dense_4h_to_h): RowParallelLinear()
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (7): ParallelTransformerLayer(
          (input_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
          (attention): ParallelSelfAttention(
            (query_key_value): ColumnParallelLinear()
            (attention_dropout): Dropout(p=0.1, inplace=False)
            (dense): RowParallelLinear()
            (output_dropout): Dropout(p=0.1, inplace=False)
          )
          (post_attention_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
          (mlp): ParallelMLP(
            (dense_h_to_4h): ColumnParallelLinear()
            (dense_4h_to_h): RowParallelLinear()
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (8): ParallelTransformerLayer(
          (input_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
          (attention): ParallelSelfAttention(
            (query_key_value): ColumnParallelLinear()
            (attention_dropout): Dropout(p=0.1, inplace=False)
            (dense): RowParallelLinear()
            (output_dropout): Dropout(p=0.1, inplace=False)
          )
          (post_attention_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
          (mlp): ParallelMLP(
            (dense_h_to_4h): ColumnParallelLinear()
            (dense_4h_to_h): RowParallelLinear()
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (9): ParallelTransformerLayer(
          (input_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
          (attention): ParallelSelfAttention(
            (query_key_value): ColumnParallelLinear()
            (attention_dropout): Dropout(p=0.1, inplace=False)
            (dense): RowParallelLinear()
            (output_dropout): Dropout(p=0.1, inplace=False)
          )
          (post_attention_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
          (mlp): ParallelMLP(
            (dense_h_to_4h): ColumnParallelLinear()
            (dense_4h_to_h): RowParallelLinear()
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (10): ParallelTransformerLayer(
          (input_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
          (attention): ParallelSelfAttention(
            (query_key_value): ColumnParallelLinear()
            (attention_dropout): Dropout(p=0.1, inplace=False)
            (dense): RowParallelLinear()
            (output_dropout): Dropout(p=0.1, inplace=False)
          )
          (post_attention_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
          (mlp): ParallelMLP(
            (dense_h_to_4h): ColumnParallelLinear()
            (dense_4h_to_h): RowParallelLinear()
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (11): ParallelTransformerLayer(
          (input_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
          (attention): ParallelSelfAttention(
            (query_key_value): ColumnParallelLinear()
            (attention_dropout): Dropout(p=0.1, inplace=False)
            (dense): RowParallelLinear()
            (output_dropout): Dropout(p=0.1, inplace=False)
          )
          (post_attention_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
          (mlp): ParallelMLP(
            (dense_h_to_4h): ColumnParallelLinear()
            (dense_4h_to_h): RowParallelLinear()
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
      )
      (final_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
    )
    (pooler): Pooler(
      (dense): Linear(in_features=768, out_features=768, bias=True)
    )
  )
  (lm_head): BertLMHead(
    (dense): Linear(in_features=768, out_features=768, bias=True)
    (layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
  )
  (binary_head): Linear(in_features=768, out_features=2, bias=True)
)
usuyama commented 4 years ago

For reference, the HuggingFace BertModel structure:

BertModel(
  (embeddings): BertEmbeddings(
    (word_embeddings): Embedding(30522, 768, padding_idx=0)
    (position_embeddings): Embedding(512, 768)
    (token_type_embeddings): Embedding(2, 768)
    (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
    (dropout): Dropout(p=0.1, inplace=False)
  )
  (encoder): BertEncoder(
    (layer): ModuleList(
      (0): BertLayer(
        (attention): BertAttention(
          (self): BertSelfAttention(
            (query): Linear(in_features=768, out_features=768, bias=True)
            (key): Linear(in_features=768, out_features=768, bias=True)
            (value): Linear(in_features=768, out_features=768, bias=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (output): BertSelfOutput(
            (dense): Linear(in_features=768, out_features=768, bias=True)
            (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (intermediate): BertIntermediate(
          (dense): Linear(in_features=768, out_features=3072, bias=True)
        )
        (output): BertOutput(
          (dense): Linear(in_features=3072, out_features=768, bias=True)
          (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
          (dropout): Dropout(p=0.1, inplace=False)
        )
      )
      (1): BertLayer(
        (attention): BertAttention(
          (self): BertSelfAttention(
            (query): Linear(in_features=768, out_features=768, bias=True)
            (key): Linear(in_features=768, out_features=768, bias=True)
            (value): Linear(in_features=768, out_features=768, bias=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (output): BertSelfOutput(
            (dense): Linear(in_features=768, out_features=768, bias=True)
            (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (intermediate): BertIntermediate(
          (dense): Linear(in_features=768, out_features=3072, bias=True)
        )
        (output): BertOutput(
          (dense): Linear(in_features=3072, out_features=768, bias=True)
          (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
          (dropout): Dropout(p=0.1, inplace=False)
        )
      )
      (2): BertLayer(
        (attention): BertAttention(
          (self): BertSelfAttention(
            (query): Linear(in_features=768, out_features=768, bias=True)
            (key): Linear(in_features=768, out_features=768, bias=True)
            (value): Linear(in_features=768, out_features=768, bias=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (output): BertSelfOutput(
            (dense): Linear(in_features=768, out_features=768, bias=True)
            (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (intermediate): BertIntermediate(
          (dense): Linear(in_features=768, out_features=3072, bias=True)
        )
        (output): BertOutput(
          (dense): Linear(in_features=3072, out_features=768, bias=True)
          (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
          (dropout): Dropout(p=0.1, inplace=False)
        )
      )
      (3): BertLayer(
        (attention): BertAttention(
          (self): BertSelfAttention(
            (query): Linear(in_features=768, out_features=768, bias=True)
            (key): Linear(in_features=768, out_features=768, bias=True)
            (value): Linear(in_features=768, out_features=768, bias=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (output): BertSelfOutput(
            (dense): Linear(in_features=768, out_features=768, bias=True)
            (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (intermediate): BertIntermediate(
          (dense): Linear(in_features=768, out_features=3072, bias=True)
        )
        (output): BertOutput(
          (dense): Linear(in_features=3072, out_features=768, bias=True)
          (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
          (dropout): Dropout(p=0.1, inplace=False)
        )
      )
      (4): BertLayer(
        (attention): BertAttention(
          (self): BertSelfAttention(
            (query): Linear(in_features=768, out_features=768, bias=True)
            (key): Linear(in_features=768, out_features=768, bias=True)
            (value): Linear(in_features=768, out_features=768, bias=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (output): BertSelfOutput(
            (dense): Linear(in_features=768, out_features=768, bias=True)
            (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (intermediate): BertIntermediate(
          (dense): Linear(in_features=768, out_features=3072, bias=True)
        )
        (output): BertOutput(
          (dense): Linear(in_features=3072, out_features=768, bias=True)
          (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
          (dropout): Dropout(p=0.1, inplace=False)
        )
      )
      (5): BertLayer(
        (attention): BertAttention(
          (self): BertSelfAttention(
            (query): Linear(in_features=768, out_features=768, bias=True)
            (key): Linear(in_features=768, out_features=768, bias=True)
            (value): Linear(in_features=768, out_features=768, bias=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (output): BertSelfOutput(
            (dense): Linear(in_features=768, out_features=768, bias=True)
            (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (intermediate): BertIntermediate(
          (dense): Linear(in_features=768, out_features=3072, bias=True)
        )
        (output): BertOutput(
          (dense): Linear(in_features=3072, out_features=768, bias=True)
          (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
          (dropout): Dropout(p=0.1, inplace=False)
        )
      )
      (6): BertLayer(
        (attention): BertAttention(
          (self): BertSelfAttention(
            (query): Linear(in_features=768, out_features=768, bias=True)
            (key): Linear(in_features=768, out_features=768, bias=True)
            (value): Linear(in_features=768, out_features=768, bias=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (output): BertSelfOutput(
            (dense): Linear(in_features=768, out_features=768, bias=True)
            (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (intermediate): BertIntermediate(
          (dense): Linear(in_features=768, out_features=3072, bias=True)
        )
        (output): BertOutput(
          (dense): Linear(in_features=3072, out_features=768, bias=True)
          (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
          (dropout): Dropout(p=0.1, inplace=False)
        )
      )
      (7): BertLayer(
        (attention): BertAttention(
          (self): BertSelfAttention(
            (query): Linear(in_features=768, out_features=768, bias=True)
            (key): Linear(in_features=768, out_features=768, bias=True)
            (value): Linear(in_features=768, out_features=768, bias=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (output): BertSelfOutput(
            (dense): Linear(in_features=768, out_features=768, bias=True)
            (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (intermediate): BertIntermediate(
          (dense): Linear(in_features=768, out_features=3072, bias=True)
        )
        (output): BertOutput(
          (dense): Linear(in_features=3072, out_features=768, bias=True)
          (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
          (dropout): Dropout(p=0.1, inplace=False)
        )
      )
      (8): BertLayer(
        (attention): BertAttention(
          (self): BertSelfAttention(
            (query): Linear(in_features=768, out_features=768, bias=True)
            (key): Linear(in_features=768, out_features=768, bias=True)
            (value): Linear(in_features=768, out_features=768, bias=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (output): BertSelfOutput(
            (dense): Linear(in_features=768, out_features=768, bias=True)
            (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (intermediate): BertIntermediate(
          (dense): Linear(in_features=768, out_features=3072, bias=True)
        )
        (output): BertOutput(
          (dense): Linear(in_features=3072, out_features=768, bias=True)
          (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
          (dropout): Dropout(p=0.1, inplace=False)
        )
      )
      (9): BertLayer(
        (attention): BertAttention(
          (self): BertSelfAttention(
            (query): Linear(in_features=768, out_features=768, bias=True)
            (key): Linear(in_features=768, out_features=768, bias=True)
            (value): Linear(in_features=768, out_features=768, bias=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (output): BertSelfOutput(
            (dense): Linear(in_features=768, out_features=768, bias=True)
            (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (intermediate): BertIntermediate(
          (dense): Linear(in_features=768, out_features=3072, bias=True)
        )
        (output): BertOutput(
          (dense): Linear(in_features=3072, out_features=768, bias=True)
          (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
          (dropout): Dropout(p=0.1, inplace=False)
        )
      )
      (10): BertLayer(
        (attention): BertAttention(
          (self): BertSelfAttention(
            (query): Linear(in_features=768, out_features=768, bias=True)
            (key): Linear(in_features=768, out_features=768, bias=True)
            (value): Linear(in_features=768, out_features=768, bias=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (output): BertSelfOutput(
            (dense): Linear(in_features=768, out_features=768, bias=True)
            (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (intermediate): BertIntermediate(
          (dense): Linear(in_features=768, out_features=3072, bias=True)
        )
        (output): BertOutput(
          (dense): Linear(in_features=3072, out_features=768, bias=True)
          (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
          (dropout): Dropout(p=0.1, inplace=False)
        )
      )
      (11): BertLayer(
        (attention): BertAttention(
          (self): BertSelfAttention(
            (query): Linear(in_features=768, out_features=768, bias=True)
            (key): Linear(in_features=768, out_features=768, bias=True)
            (value): Linear(in_features=768, out_features=768, bias=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (output): BertSelfOutput(
            (dense): Linear(in_features=768, out_features=768, bias=True)
            (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (intermediate): BertIntermediate(
          (dense): Linear(in_features=768, out_features=3072, bias=True)
        )
        (output): BertOutput(
          (dense): Linear(in_features=3072, out_features=768, bias=True)
          (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
          (dropout): Dropout(p=0.1, inplace=False)
        )
      )
    )
  )
  (pooler): BertPooler(
    (dense): Linear(in_features=768, out_features=768, bias=True)
    (activation): Tanh()
  )
)
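
For orientation, here is a rough sketch of which per-layer tensors correspond by name between the two dumps above. meg_sd / hf_sd are placeholders for the two state_dicts, not real APIs, and this is only the name correspondence, not a working conversion: the LayerNorms are skipped on purpose because their placement differs between the two models, which is exactly the mismatch noted above.

def copy_bert_layer(meg_sd, hf_sd, i, hidden=768):
    """Copy the layer-i tensors that have a direct counterpart by name (sketch)."""
    p = f"language_model.transformer.layers.{i}."
    h = f"encoder.layer.{i}."

    # Fused QKV -> separate query/key/value. Assumes the fused weight is laid
    # out as three [hidden, hidden] blocks; some Megatron checkpoint versions
    # interleave the blocks per attention head and need reordering first.
    qkv_w = meg_sd[p + "attention.query_key_value.weight"]  # [3*hidden, hidden]
    qkv_b = meg_sd[p + "attention.query_key_value.bias"]    # [3*hidden]
    for j, name in enumerate(("query", "key", "value")):
        hf_sd[h + f"attention.self.{name}.weight"] = qkv_w[j * hidden:(j + 1) * hidden]
        hf_sd[h + f"attention.self.{name}.bias"] = qkv_b[j * hidden:(j + 1) * hidden]

    # Attention output projection and the two MLP projections copy directly;
    # both sides store nn.Linear-style [out_features, in_features] weights.
    hf_sd[h + "attention.output.dense.weight"] = meg_sd[p + "attention.dense.weight"]
    hf_sd[h + "attention.output.dense.bias"] = meg_sd[p + "attention.dense.bias"]
    hf_sd[h + "intermediate.dense.weight"] = meg_sd[p + "mlp.dense_h_to_4h.weight"]
    hf_sd[h + "intermediate.dense.bias"] = meg_sd[p + "mlp.dense_h_to_4h.bias"]
    hf_sd[h + "output.dense.weight"] = meg_sd[p + "mlp.dense_4h_to_h.weight"]
    hf_sd[h + "output.dense.bias"] = meg_sd[p + "mlp.dense_4h_to_h.bias"]
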
usuyama commented 4 years ago

Any thoughts/advice? @jaredcasper @PyxAI @harkous @raulpuric

Beomi commented 3 years ago

Any updates?

vdabravolski commented 3 years ago

I was interested in the same questions, @usuyama. See the excerpt from the Megatron paper; it does look like Megatron<->HF conversion will require some updates on the HF side. [image: excerpt from the Megatron paper describing the rearranged LayerNorm/residual ordering]

usuyama commented 3 years ago

Thanks, @vdabravolski

I need to check the forward functions for the details, but the ordering of the sub-layers looks different, as you pointed out.

Megatron-LM

        (11): ParallelTransformerLayer(
          (input_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
          (attention): ParallelSelfAttention(
            (query_key_value): ColumnParallelLinear()
            (attention_dropout): Dropout(p=0.1, inplace=False)
            (dense): RowParallelLinear()
            (output_dropout): Dropout(p=0.1, inplace=False)
          )
          (post_attention_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
          (mlp): ParallelMLP(
            (dense_h_to_4h): ColumnParallelLinear()
            (dense_4h_to_h): RowParallelLinear()
            (dropout): Dropout(p=0.1, inplace=False)
          )

HuggingFace

      (11): BertLayer(
        (attention): BertAttention(
          (self): BertSelfAttention(
            (query): Linear(in_features=768, out_features=768, bias=True)
            (key): Linear(in_features=768, out_features=768, bias=True)
            (value): Linear(in_features=768, out_features=768, bias=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (output): BertSelfOutput(
            (dense): Linear(in_features=768, out_features=768, bias=True)
            (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (intermediate): BertIntermediate(
          (dense): Linear(in_features=768, out_features=3072, bias=True)
        )
        (output): BertOutput(
          (dense): Linear(in_features=3072, out_features=768, bias=True)
          (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
          (dropout): Dropout(p=0.1, inplace=False)
        )
      )
amirj commented 3 years ago

I have the same question. Any new updates?

moyix commented 3 years ago

Curious about this too – I have a GPT2 model trained with Megatron and would love to get it imported into HF.

haven-jeon commented 3 years ago

In order to convert a Megatron GPT2 model to HF (HuggingFace Transformers) GPT2, I performed a layer-level parameter conversion and verified the result, but the conversion did not work properly.

The following is the core of the mapping.

Megatron GPT2 transformer layer parameters and shapes:

layers.0.input_layernorm.weight, shape: torch.Size([1920])
layers.0.input_layernorm.bias, shape: torch.Size([1920])
layers.0.attention.query_key_value.weight, shape: torch.Size([5760, 1920])  # need transpose
layers.0.attention.query_key_value.bias, shape: torch.Size([5760])
layers.0.attention.dense.weight, shape: torch.Size([1920, 1920])
layers.0.attention.dense.bias, shape: torch.Size([1920])
layers.0.post_attention_layernorm.weight, shape: torch.Size([1920])
layers.0.post_attention_layernorm.bias, shape: torch.Size([1920])
layers.0.mlp.dense_h_to_4h.weight, shape: torch.Size([7680, 1920])   # need transpose
layers.0.mlp.dense_h_to_4h.bias, shape: torch.Size([7680])
layers.0.mlp.dense_4h_to_h.weight, shape: torch.Size([1920, 7680])  # need transpose
layers.0.mlp.dense_4h_to_h.bias, shape: torch.Size([1920])

HF GPT2 transformer layer parameters and shapes:

transformer.h.0.ln_1.weight, shape: torch.Size([1920])
transformer.h.0.ln_1.bias, shape: torch.Size([1920])
transformer.h.0.attn.bias, shape: torch.Size([1, 1, 1920, 1920])
transformer.h.0.attn.masked_bias, shape: torch.Size([])
transformer.h.0.attn.c_attn.weight, shape: torch.Size([1920, 5760])
transformer.h.0.attn.c_attn.bias, shape: torch.Size([5760])
transformer.h.0.attn.c_proj.weight, shape: torch.Size([1920, 1920])
transformer.h.0.attn.c_proj.bias, shape: torch.Size([1920])
transformer.h.0.ln_2.weight, shape: torch.Size([1920])
transformer.h.0.ln_2.bias, shape: torch.Size([1920])
transformer.h.0.mlp.c_fc.weight, shape: torch.Size([1920, 7680])
transformer.h.0.mlp.c_fc.bias, shape: torch.Size([7680])
transformer.h.0.mlp.c_proj.weight, shape: torch.Size([7680, 1920])
transformer.h.0.mlp.c_proj.bias, shape: torch.Size([1920])
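
A rough sketch of the rename-plus-transpose implied by the two lists above (an illustration, not the exact conversion script; meg_sd / hf_sd stand for the two state_dicts):

def convert_gpt2_layer(meg_sd, hf_sd, i):
    """Per-layer rename/transpose for Megatron GPT2 -> HF GPT2 (sketch).

    HF GPT2 uses Conv1D modules, whose weights are stored as
    [in_features, out_features], so Megatron's Linear-style
    [out_features, in_features] weights are transposed. Caveat: depending on
    the Megatron checkpoint version, the fused query_key_value rows are
    grouped per attention head, so a plain transpose of that tensor may not
    be enough; the HF conversion script mentioned later in this thread
    reorders it (a fix_query_key_value_ordering step) before use.
    """
    p = f"layers.{i}."
    h = f"transformer.h.{i}."

    hf_sd[h + "ln_1.weight"] = meg_sd[p + "input_layernorm.weight"]
    hf_sd[h + "ln_1.bias"] = meg_sd[p + "input_layernorm.bias"]
    hf_sd[h + "ln_2.weight"] = meg_sd[p + "post_attention_layernorm.weight"]
    hf_sd[h + "ln_2.bias"] = meg_sd[p + "post_attention_layernorm.bias"]

    # Linear -> Conv1D: transpose the weight matrices. Note attention.dense
    # is square ([1920, 1920]), so a missing transpose there is easy to miss.
    hf_sd[h + "attn.c_attn.weight"] = meg_sd[p + "attention.query_key_value.weight"].t()
    hf_sd[h + "attn.c_attn.bias"] = meg_sd[p + "attention.query_key_value.bias"]
    hf_sd[h + "attn.c_proj.weight"] = meg_sd[p + "attention.dense.weight"].t()
    hf_sd[h + "attn.c_proj.bias"] = meg_sd[p + "attention.dense.bias"]
    hf_sd[h + "mlp.c_fc.weight"] = meg_sd[p + "mlp.dense_h_to_4h.weight"].t()
    hf_sd[h + "mlp.c_fc.bias"] = meg_sd[p + "mlp.dense_h_to_4h.bias"]
    hf_sd[h + "mlp.c_proj.weight"] = meg_sd[p + "mlp.dense_4h_to_h.weight"].t()
    hf_sd[h + "mlp.c_proj.bias"] = meg_sd[p + "mlp.dense_4h_to_h.bias"]
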

The attn.bias and masked_bias buffers were the same as the values implemented in Megatron GPT2, so they were skipped during the conversion; all other parameters were converted, but the text generated by the HF GPT2 model was still different from that of the Megatron GPT2 model.

I guess HF GPT2 and Megatron GPT2 differ in some layer-level implementation details. If you have any ideas about this, please let me know.

usuyama commented 3 years ago

As @vdabravolski pointed out, Megatron rearranged the LayerNorm and residual connections in the transformer block. Maybe that's one of the differences you observed, @haven-jeon?
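
For the BERT structures shown earlier in the thread, the difference boils down to where LayerNorm sits relative to the residual connection. A rough sketch with illustrative functions (not the actual module code):

def hf_bert_block(x, attn, mlp, ln1, ln2):
    # HF BERT (post-LN): LayerNorm is applied after each residual addition
    x = ln1(x + attn(x))
    x = ln2(x + mlp(x))
    return x

def megatron_block(x, attn, mlp, input_ln, post_attn_ln):
    # Megatron (pre-LN): LayerNorm is applied to each sub-block's input and
    # the residual path skips it, so weights are not interchangeable by a
    # simple rename even when their shapes match
    x = x + attn(input_ln(x))
    x = x + mlp(post_attn_ln(x))
    return x

(HF's GPT2 block already uses the pre-LN ordering via ln_1/ln_2, so for the GPT2 case the Conv1D transposes and the fused-QKV layout sketched above are more likely culprits than the LayerNorm placement.)
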

haven-jeon commented 3 years ago

@usuyama, thanks for the reminder. I thought that part of the paper only applied to BERT, but looking at the Megatron-LM code, the transformer block appears to be shared with GPT2.

https://github.com/NVIDIA/Megatron-LM/blob/1b3dfa2ff9fe1643e15ddd1cf775abcdb2146f13/megatron/model/transformer.py#L445 This part looks different from the HF transformers. 🤔

malteos commented 2 years ago

Any news on this issue?

Symbolk commented 2 years ago

Any news on this?

chrisby commented 1 year ago

I have not tried it, but this exists: https://github.com/huggingface/transformers/tree/main/src/transformers/models/megatron_gpt2

github-actions[bot] commented 1 year ago

Marking as stale. No activity in 60 days. Remove stale label or comment or this will be closed in 7 days.

github-actions[bot] commented 11 months ago

Marking as stale. No activity in 60 days.

devymex commented 10 months ago

1. Convert llama-2 from HuggingFace to Megatron-LM:

PYTHONPATH=$(pwd) tools/checkpoint/util.py --model-type=GPT --loader=llama2_hf --load-dir=<HF_MODEL_DIR> --save-dir=<SAVE_DIR> --tokenizer-model=<TOKENIZER_MODEL_FILE>

2. Convert llama-2 from Megatron-LM to HuggingFace:

Step 1. Download this Python script and save it as Megatron-LM/tools/checkpoint/saver_llama2_hf.py

Step 2. Do the conversion

PYTHONPATH=$(pwd) tools/checkpoint/util.py --model-type=GPT --saver=llama2_hf --load-dir=<MEGATRON_CHECKPOINT_DIR> --save-dir=<SAVE_DIR>

But before converting LLaMA-2 from MGT (Megatron) to HF, you need to ensure that the following parameters in MGT were set to the same default values as in HF during your training process:

  1. Set --norm-epsilon=1e-6,
  2. Do not enable --apply-query-key-layer-scaling (or enable --no-query-key-layer-scaling in older versions),
  3. Neither a custom attention_mask nor position_ids takes effect in MGT's GPT models during training,
  4. Enable --disable-bias-linear.

TheRootOf3 commented 9 months ago

Hi, are there any updates? I'm mostly interested in converting GPT-2/Bloom checkpoints.

CaesarWWK commented 8 months ago

Convert llama-2 from HuggingFace to Megatron-LM:

PYTHONPATH=$(pwd) tools/checkpoint/util.py --model-type=GPT --loader=llama2_hf --load-dir=<HF_MODEL_DIR> --save-dir=<SAVE_DIR> --tokenizer-model=<TOKENIZER_MODEL>

Convert a llama-2 checkpoint from Megatron-LM back to HuggingFace:

Step 1. Download this file and save it as Megatron-LM/tools/checkpoint/saver_llama2_hf.py

Step 2. Do the conversion

PYTHONPATH=$(pwd) tools/checkpoint/util.py --model-type=GPT --loader=megatron --saver=llama2_hf --load-dir=<MEGATRON_CHECKPOINT_DIR> --save-dir=<SAVE_DIR>

Step 3. Test

from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("<SAVE_DIR>")

Works perfectly for me. Note that, compared to the instructions above, --loader is changed from llama2_hf to megatron (with --saver=llama2_hf), since here we want to convert a Megatron checkpoint to HF.

CaesarWWK commented 8 months ago

Hi, are there any updates? I'm mostly interested in converting GPT-2/Bloom checkpoints.

They have a script for converting GPT-2 in HF's repo under transformers/models/megatron_gpt2: https://huggingface.co/docs/transformers/model_doc/megatron_gpt2

Otherwise it should be somewhere in Megatron's repo.

github-actions[bot] commented 6 months ago

Marking as stale. No activity in 60 days.

chenfengshijie commented 6 months ago

Could you provide guidance on how to consolidate the weights of a module (specifically, ParallelMLP and ParallelSelfAttention) into a PyTorch-compatible format? I am using a tensor-parallel size greater than 1, which results in the module's parameters being distributed across different ranks. How can I aggregate these to obtain the complete set of model weights?
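
Not an official answer, but the usual approach is to load every tensor-parallel rank's checkpoint and concatenate the sharded tensors: ColumnParallelLinear weights/biases (query_key_value, dense_h_to_4h) and VocabParallelEmbedding are split along dim 0, RowParallelLinear weights (attention dense, dense_4h_to_h) along dim 1, and everything else is replicated. A minimal sketch, assuming tensor parallelism only (no pipeline parallelism), the usual iter_XXXXXXX/mp_rank_0X/model_optim_rng.pt layout, and a flat state_dict (adjust the keys for your checkpoint version):

import torch

def merge_tp_shards(shard_paths):
    """Merge tensor-parallel Megatron shards into one state_dict (sketch)."""
    shards = [torch.load(p, map_location="cpu")["model"] for p in shard_paths]
    merged = {}
    for name, tensor in shards[0].items():
        parts = [sd[name] for sd in shards]
        if any(k in name for k in ("query_key_value", "dense_h_to_4h", "word_embeddings")):
            # ColumnParallelLinear / VocabParallelEmbedding: sharded along dim 0
            merged[name] = torch.cat(parts, dim=0)
        elif name.endswith("weight") and ("attention.dense" in name or "dense_4h_to_h" in name):
            # RowParallelLinear weights: sharded along the input dim (dim 1)
            merged[name] = torch.cat(parts, dim=1)
        else:
            # LayerNorms, row-parallel biases, position/tokentype embeddings,
            # etc. are replicated across ranks; take rank 0's copy
            merged[name] = tensor
    return merged
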

sudy-super commented 6 months ago

1. Convert llama-2 from HuggingFace to Megatron-LM:

PYTHONPATH=$(pwd) tools/checkpoint/util.py --model-type=GPT --loader=llama2_hf --load-dir=<HF_MODEL_DIR> --save-dir=<SAVE_DIR> --tokenizer-model=<TOKENIZER_MODEL_FILE>

2. Convert llama-2 from Megatron-LM to HuggingFace:

Step 1. Download this Python script and save it as Megatron-LM/tools/checkpoint/saver_llama2_hf.py

Step 2. Do the conversion

PYTHONPATH=$(pwd) tools/checkpoint/util.py --model-type=GPT --saver=llama2_hf --load-dir=<MEGATRON_CHECKPOINT_DIR> --save-dir=<SAVE_DIR>

But before converting LLaMA-2 from MGT to HF, you need to ensure that the following parameters in MGT were set to the same default values as in HF during your training process:

  1. Set --norm-epsilon=1e-6,
  2. Do not enable --apply-query-key-layer-scaling (or enable --no-query-key-layer-scaling in older versions),
  3. Neither a custom attention_mask nor position_ids takes effect in MGT's GPT models during training,
  4. Enable --disable-bias-linear.

Does this conversion script support GQA?

github-actions[bot] commented 4 months ago

Marking as stale. No activity in 60 days.

babu111 commented 3 months ago

I found a script in transformers: https://github.com/huggingface/transformers/blob/main/src/transformers/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py. Has anyone tried this before? It seems to convert a GPT-2 model from Megatron format to HuggingFace format.
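
I have not verified it end to end, but the documented usage of that script is roughly the following, pointed at a zipped Megatron GPT-2 checkpoint (the file name below is NVIDIA's published 345M example and is just an illustration):

git clone https://github.com/huggingface/transformers.git
python3 transformers/src/transformers/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py megatron_gpt2_345m_v0_0.zip

It should write the converted config.json and pytorch_model.bin next to the checkpoint, which you can then load with GPT2LMHeadModel.from_pretrained.
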

JiwenJ commented 3 months ago

1. Convert llama-2 from HuggingFace to Megatron-LM:

PYTHONPATH=$(pwd) tools/checkpoint/util.py --model-type=GPT --loader=llama2_hf --load-dir=<HF_MODEL_DIR> --save-dir=<SAVE_DIR> --tokenizer-model=<TOKENIZER_MODEL_FILE>

2. Convert llama-2 from Megatron-LM to HuggingFace:

Step 1. Download this Python script and save it as Megatron-LM/tools/checkpoint/saver_llama2_hf.py

Step 2. Do the conversion

PYTHONPATH=$(pwd) tools/checkpoint/util.py --model-type=GPT --saver=llama2_hf --load-dir=<MEGATRON_CHECKPOINT_DIR> --save-dir=<SAVE_DIR>

But before converting LLaMA-2 from MGT (Megatron) to HF, you need to ensure that the following parameters in MGT were set to the same default values as in HF during your training process:

  1. Set --norm-epsilon=1e-6,
  2. Do not enable --apply-query-key-layer-scaling (or enable --no-query-key-layer-scaling in older versions),
  3. Neither a custom attention_mask nor position_ids takes effect in MGT's GPT models during training,
  4. Enable --disable-bias-linear.

Does this support GQA?

github-actions[bot] commented 1 month ago

Marking as stale. No activity in 60 days.