huggingface / nanotron

Minimalistic large language model 3D-parallelism training
Apache License 2.0

Llama3 conversion scripts 🦙 #174

Open TJ-Solergibert opened 4 months ago

TJ-Solergibert commented 4 months ago

Hello,

In this PR, I include the scripts to convert the checkpoints of Llama3 8B & 70B to Nanotron. Although there are still some details to be polished, the current status is as follows:

All conversions are carried out in BFLOAT16 and on the CPU, but we still need at least one GPU because the ParallelContext requires it. The 8B model fits on a GPU with 80 GB, but the 70B model does not. Even so, in ALL conversions we set DP=PP=TP=1. I have confirmed that Nanotron supports changing the TP topology, although while waiting for GPUs in my cluster I also developed a fancy script with broadcasts, scatters, and gathers to perform the conversion with TP>1. I have also tried a dummy finetune with TP=2 starting from the converted TP=1 8B checkpoint, stored it back with TP=2, checked the results in Nanotron (correct, results below), and then converted it back to HF, with the result still being correct. I have tried to cover every case I could think of.
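For reference, here is a minimal sketch of the HF-side load under these settings (BF16 weights kept on CPU); the exact arguments used by the conversion scripts may differ.

```python
import torch
from transformers import AutoModelForCausalLM

# Weights are loaded in BF16 and stay on CPU by default; only the
# ParallelContext (DP=PP=TP=1) needs a GPU.
hf_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct",
    torch_dtype=torch.bfloat16,
)
```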

Included

Results & Precision

It is impossible for the two models (HF & Nanotron) to produce exactly the same logits with a precision that passes the assert_close test. This is true both at the model level and at the layer level because, despite having the same parameters, the two models perform different operations. Different in the sense of...

I have a (somewhat catastrophic) notebook where the differences at each operation level are evident. But what really matters is not so much the logits as the predictions and their order. To verify this, I developed the generate_XXXX.py scripts that, from the same prompt and for the desired number of tokens, print the 10 most probable predictions and an accuracy value over the whole sequence. I chose a fixed prompt to 1. manually check that the predictions make sense and 2. compare across the different converted models. The following table shows the accuracy results for different configurations.

| Experiment | Backend | Size | TP | Accuracy |
|---|---|---|---|---|
| OG HF | HF | 8 | 1 | 0.83 |
| OG HF --> Nanotron | Nanotron | 8 | 1 | 0.83 |
| OG HF --> Nanotron --> HF | HF | 8 | 1 | 0.83 |
| OG HF | HF | 70 | 2 | 0.89 |
| OG HF --> Nanotron | Nanotron | 70 | 2 | 0.83 |
| OG HF --> Nanotron --> HF | HF | 70 | 2 | 0.89 |
| HF --> Nanotron --> Dummy Finetune to change TP=2 --> HF | HF | 8 | 1 --> 2 | 0.83 |

It is worth noting that:

  1. For the 70B model, when using the HF backend with AutoModelForCausalLM.from_pretrained() there is NO tensor parallelism, while in Nanotron there is.
  2. The accuracy values are from the prediction of 512 tokens.
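For clarity, here is a rough sketch of how such an accuracy metric can be computed (an assumed reconstruction, not the PR's exact code): for each position of the fixed prompt, the argmax prediction is compared against the next token, and the top-10 candidates of the last position are inspected.

```python
import torch

@torch.no_grad()
def sequence_accuracy(model, input_ids):
    # input_ids: [1, seq_len] token ids of the fixed prompt (HF-style model call).
    logits = model(input_ids).logits                 # [1, seq_len, vocab]
    preds = logits[:, :-1].argmax(dim=-1)            # prediction for position t+1 made at position t
    targets = input_ids[:, 1:]
    accuracy = (preds == targets).float().mean().item()
    top10 = logits[0, -1].topk(10).indices.tolist()  # 10 most probable next tokens
    return accuracy, top10
```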

Details

This PR is built on top of #168 (FA2 kernel), which is the same as in the HF implementation.

After extensive reverse engineering, I found a critical point that was significantly different from the HuggingFace implementation: RoPE. After numerous tests, even transferring the RoPE from the HF implementation, it turns out that there are 2 fundamental parameters of the FlashRotaryEmbedding layer:

I have included both values in LlamaConfig, with the OLD values as defaults, although I propose at least changing the interleaved default to False.

In essence, to perform the conversions, we initialize the two implementations (HuggingFace & Nanotron) and copy the parameters layer by layer. After trying several methods to copy the weights, I opted for the copy_ method, because this way we preserve the ShardedInfo & TiedInfo of all the NanotronParameters.
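As an illustration of that choice, here is a minimal sketch of the in-place copy (a hypothetical helper; the PR's converter walks the actual module hierarchies):

```python
import torch

@torch.no_grad()
def copy_weight(dst: torch.Tensor, src: torch.Tensor) -> None:
    # copy_ writes into the existing tensor object, so a NanotronParameter
    # destination keeps its ShardedInfo / TiedInfo metadata intact.
    assert dst.shape == src.shape, f"shape mismatch: {dst.shape} vs {src.shape}"
    dst.copy_(src)
```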

The conversion from HF to Nanotron is fast, taking 2 and 16 minutes for the 8B and 70B models respectively. However, the conversion from Nanotron to HF takes 5 and 51 minutes respectively. This is due to the initialization of the HF model (AutoModelForCausalLM.from_config()).

When converting to Nanotron, we also store the tokenizer (as in the HF models) and generate a config.yaml with the basic configurations and parameters to start training from the checkpoint. Additionally, the conversions include assertions on all parameters to ensure that we are copying the parameters correctly and making the process as explicit as possible for the conversion of future models.
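The per-parameter checks could look roughly like this (a hypothetical helper; in the PR the assertions are inlined in the conversion scripts):

```python
import torch

def check_converted_params(nanotron_params: dict, hf_params: dict) -> None:
    # Both dicts map already-aligned parameter names to tensors; every copied
    # tensor must match its source exactly, since the copy is done in BF16
    # with no further transformation.
    for name, nt_tensor in nanotron_params.items():
        torch.testing.assert_close(nt_tensor, hf_params[name], rtol=0.0, atol=0.0)
```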

TODO

Instructions

Each file contains instructions in its header; I recommend the following commands to launch the evaluations and conversions.

torchrun --nproc-per-node 1 tools/llama3/generate_hf_predictions.py --pretrained-model-name-or-path  meta-llama/Meta-Llama-3-8B-Instruct
torchrun --nproc-per-node 1 tools/llama3/convert_hf_to_nanotron.py --nanotron-checkpoint-path nanotron_checkpoints/NanotronLlama3-8B --pretrained-model-name-or-path meta-llama/Meta-Llama-3-8B-Instruct
torchrun --nproc-per-node 2 tools/llama3/generate_nanotron_predictions.py --tp 2 --nanotron-checkpoint-path nanotron_checkpoints/NanotronLlama3-8B
torchrun --nproc-per-node 1 tools/llama3/convert_nanotron_to_hf.py --nanotron-checkpoint-path nanotron_checkpoints/NanotronLlama3-8B --hugging-face-checkpoint-path hf_checkpoints/ConvertedNanotronLlama3-8B
torchrun --nproc-per-node 1 tools/llama3/generate_hf_predictions.py --pretrained-model-name-or-path hf_checkpoints/ConvertedNanotronLlama3-8B
xrsrke commented 4 months ago

@TJ-Solergibert Thanks for the PR! Have you tried continued pretraining or finetuning of a Llama3 checkpoint converted to Nanotron? I encountered some exploding-gradient issues in my own experiments (not in your PR).

TJ-Solergibert commented 4 months ago

Hi @xrsrke ,

After your comments about exploding gradient issues I've run the following:

  1. Preprocessed the DKYoon/SlimPajama-6B dataset to use Nanoset
  2. Changed the TXT prompt of the generate_XXX.py scripts to a prompt generated by meta-llama/Meta-Llama-3-8B. I did this to get high accuracy in the tests so that flaws are easier to detect (if we already perform badly and then perform worse, it is hard to tell where the decrease comes from).
  3. Run generate_hf_predictions.py for the base Llama-3-8B model; we get an accuracy of 0.888671875: torchrun --nproc-per-node 1 tools/llama3/generate_hf_predictions.py --pretrained-model-name-or-path models/Meta-Llama-3-8B
  4. Convert the checkpoint to Nanotron, 2 minutes: torchrun --nproc-per-node 1 tools/llama3/convert_hf_to_nanotron.py --nanotron-checkpoint-path nanotron_checkpoints/NanotronLlama-3-8B --pretrained-model-name-or-path models/Meta-Llama-3-8B
  5. Generate Nanotron predictions with generate_nanotron_predictions.py with TP = 1 & TP = 2: torchrun --nproc-per-node 1 tools/llama3/generate_nanotron_predictions.py --tp 1 --nanotron-checkpoint-path nanotron_checkpoints/NanotronLlama-3-8B We get 0.888671875 & 0.869140625 with TP = 1 & TP = 2 respectively. This difference is due to TP and what I explained about shapes and GEMMs (see the short sketch after this list).
  6. Run a fine-tune for 500 steps with TP = 2 and 256000 tokens. The logs of the run are here. I don't see any issues.
  7. Then I run generate_nanotron_predictions.py with the new checkpoint with TP = 2. The accuracy is very very low. Something is happening.
    • First, I reran the experiment for just 5 steps. Accuracy was still very, very low.
    • I tried PP = 2 & TP = 1 to check whether it was a TP problem. This doesn't make much sense because, as I've said, we can run the Nanotron generations with different TP sizes, and the 70B model is converted to a TP = DP = PP = 1 checkpoint and works when converting in both directions as well as in the generations. The accuracy still sucks.
    • Finally, I reduced the learning rate. This was the actual problem, as I was using the default one. I set a very low value and trained for 100 iterations. The logs are also in W&B.
  8. Run predictions with the fine-tuned model. We get 0.876953125 & 0.86328125 with TP = 1 and TP = 2 respectively. torchrun --nproc-per-node 2 tools/llama3/generate_nanotron_predictions.py --tp 2 --nanotron-checkpoint-path nanotron_checkpoints/NanotronLlama-3-8B-finetuned/100
  9. Convert back to HuggingFace: torchrun --nproc-per-node 1 tools/llama3/convert_nanotron_to_hf.py --nanotron-checkpoint-path nanotron_checkpoints/NanotronLlama-3-8B-finetuned/100 --hugging-face-checkpoint-path models/Meta-Llama-3-8B-finetuned
  10. Run HuggingFace generations and got 0.880859375 accuracy: torchrun --nproc-per-node 1 tools/llama3/generate_hf_predictions.py --pretrained-model-name-or-path models/Meta-Llama-3-8B-finetuned
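As mentioned in point 5, here is a tiny illustration (not Nanotron code) of why TP changes the logits slightly: a row-parallel linear sums partial GEMMs, which changes the floating-point reduction order compared to a single GEMM, so outputs differ at the bit level even with identical weights.

```python
import torch

torch.manual_seed(0)
x = torch.randn(4, 1024, dtype=torch.bfloat16)
w = torch.randn(1024, 1024, dtype=torch.bfloat16)

full = x @ w                                                # TP = 1: a single GEMM
half = 512
sharded = x[:, :half] @ w[:half] + x[:, half:] @ w[half:]   # TP = 2: partial GEMMs + (simulated) all-reduce

print((full - sharded).abs().max())                         # small but non-zero difference
```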

So I haven't experienced any problems; let me know if I should look into anything else!

Toni

PS: We could upload Nanotron Llama3 checkpoints to the Hub, right? PPS: In W&B I've included the results of a dummy run with 5000 steps.

3outeille commented 3 months ago

Nice PR! When loading Llama3 from HF to nanotron, I had to change the rotary embedding (https://github.com/huggingface/nanotron/pull/199/commits/31c12e86f83052a85caedcf978ae6373ede43cda), otherwise the generation was not good.

TJ-Solergibert commented 3 months ago

Hi,

I just took care of the "training case". As you can see, there are 2 RotaryEmbedding layers: self.rotary_embedding & self.flash_rotary_embedding. The first one is only used in the "inference case", while the second is only for the training case. The interleaved setting only concerns the RotaryEmbedding from flash-attention; I don't see the point of this and this, for example, as they belong to the inference case.

For training, the RotaryEmbedding of flash-attention needs interleaved=True to match the HuggingFace implementation. Now, for the inference case, you can of course include other RotaryEmbedding layers, but I don't understand why we would have 3 different RotaryEmbedding layers and have to take care of the interleaved setting for the inference case too. We should have at most 2, right?
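For context, here is a generic sketch of the two RoPE conventions behind the interleaved flag (an illustration only, not Nanotron's FlashRotaryEmbedding): interleaved=True rotates adjacent pairs (x0, x1), (x2, x3), ..., while interleaved=False pairs the first and second halves of the head dimension (the rotate_half form).

```python
import torch

def rotate_half_noninterleaved(x: torch.Tensor) -> torch.Tensor:
    # Pairs element i with element i + d/2.
    x1, x2 = x.chunk(2, dim=-1)
    return torch.cat((-x2, x1), dim=-1)

def rotate_half_interleaved(x: torch.Tensor) -> torch.Tensor:
    # Pairs adjacent elements (x0, x1), (x2, x3), ...
    x1, x2 = x[..., 0::2], x[..., 1::2]
    return torch.stack((-x2, x1), dim=-1).flatten(-2)

def apply_rope(x: torch.Tensor, cos: torch.Tensor, sin: torch.Tensor, interleaved: bool) -> torch.Tensor:
    # cos / sin must be laid out to match the chosen pairing convention.
    rot = rotate_half_interleaved(x) if interleaved else rotate_half_noninterleaved(x)
    return x * cos + rot * sin
```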

TJ-Solergibert commented 2 months ago

Sorry, there were 68 commits that I don't know how ended up here 😅. All your comments refer to those commits. Now the branch only contains the files related to the Llama3 converter scripts. Before merging, I should delete the generate_XX scripts, which are only there for debugging purposes.

The conflicts are related to the interleaved parameter in the RoPE embeddings. As we discussed, this parameter must be set to False to load Llama3 pretrained checkpoints, but the default value in Nanotron has always been True, so I set the default value to True.

Let me know if there is still any issue!

zzhhjjj commented 2 months ago

add some instructions for downloading the weights?