huggingface / nanotron

Minimalistic large language model 3D-parallelism training
Apache License 2.0
1.23k stars · 122 forks

[Feature] Topology-agnostic optimizer states loading #23

Closed xrsrke closed 10 months ago

xrsrke commented 10 months ago

I started training a model with tp=4, dp=2, pp=1 for 1000 steps (call this the first config), then resumed from the first config's checkpoint at iteration 20 (by changing the checkpoint's latest.txt to 20) with a new config tp=2, dp=2, pp=1 (call this the second config), and continued to 1000 steps. I observed the same training losses for ZeRO-0 and ZeRO-1 respectively.
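For context, topology-agnostic loading means the optimizer states saved under one tensor-parallel layout must be gathered and re-split for a different layout at load time. A minimal sketch of that resharding idea in plain NumPy — `reshard_tp` is a hypothetical helper for illustration, not nanotron's actual API:

```python
import numpy as np

def reshard_tp(shards, new_tp, axis=0):
    """Reshard tensor-parallel shards to a new TP degree.

    Concatenates the old shards along the sharded axis into the full
    (unsharded) tensor, then splits it evenly into `new_tp` pieces.
    Hypothetical helper, not nanotron's actual API.
    """
    full = np.concatenate(shards, axis=axis)
    return np.split(full, new_tp, axis=axis)

# Example: an optimizer state tensor (e.g. Adam's exp_avg) saved as
# tp=4 shards, resharded for a new run with tp=2.
old_shards = [np.arange(i * 4, (i + 1) * 4, dtype=np.float32) for i in range(4)]
new_shards = reshard_tp(old_shards, new_tp=2)
assert len(new_shards) == 2 and new_shards[0].shape == (8,)
```

The same gather-then-split step would apply per sharded parameter, and for ZeRO-1 additionally across the DP-partitioned state.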

Reproduce script

#!/bin/bash
set -o pipefail  # without this, $? after `cmd | tee` reports tee's status, not the command's

# Path to the nanotron repository
NANOTRON_DIR="/fsx/phuc/projects/nanotron"

# Parse --zero_stage=N from the first argument
ZERO_STAGE=${1#*=}

if [[ "$ZERO_STAGE" != 0 && "$ZERO_STAGE" != 1 ]]; then
    echo "Invalid or no --zero_stage argument provided. Please use --zero_stage=0 or --zero_stage=1."
    exit 1
fi

if [ "$ZERO_STAGE" -eq 0 ]; then
    CKP_SAVED_PATH="/fsx/phuc/checkpoints/nanotron-optim-loading/no_zero1_dp_2_tp4_pp1"
    CONTINUE_AT_ITERATION=20
    CKP_CONFIG_PATH="$NANOTRON_DIR/downloads/debug_optim/zero0/config_tiny_llama_dp_2_tp4_pp1_with_no_zero.yaml"
    CONTINUE_CONFIG_PATH="$NANOTRON_DIR/downloads/debug_optim/zero0/config_tiny_llama_dp_2_tp2_pp1_with_no_zero.yaml"
else
    CKP_SAVED_PATH="/fsx/phuc/checkpoints/nanotron-optim-loading/zero1/zero1_dp2_tp4_pp1"
    CONTINUE_AT_ITERATION=20
    CKP_CONFIG_PATH="$NANOTRON_DIR/downloads/debug_optim/zero1/config_llama_dp2_tp4_pp1_with_zero1.yaml"
    CONTINUE_CONFIG_PATH="$NANOTRON_DIR/downloads/debug_optim/zero1/config_llama_dp2_tp2_pp1_with_zero1.yaml"
fi

OUTPUT_FILE="training_output_zero_stage_$ZERO_STAGE.log"

# First command - Generate a checkpoint
echo "Running checkpoint generation with dp=2, tp=4, pp=1" | tee -a $OUTPUT_FILE
USE_FAST=1 CUDA_DEVICE_MAX_CONNECTIONS=1 torchrun --nproc_per_node=8 $NANOTRON_DIR/run_train.py --config-file $CKP_CONFIG_PATH 2>&1 | tee -a $OUTPUT_FILE

# Check if the previous command was successful
if [ $? -eq 0 ]; then
    echo "Checkpoint generation successful, proceeding to continue training." | tee -a $OUTPUT_FILE
else
    echo "Checkpoint generation failed, aborting script." | tee -a $OUTPUT_FILE
    exit 1
fi

# Now we rewind the checkpoint's latest.txt to $CONTINUE_AT_ITERATION
# so that training resumes from that iteration, letting us compare
# the training losses between the two runs
echo "Modifying the checkpoint's latest to $CONTINUE_AT_ITERATION" | tee -a $OUTPUT_FILE
echo "$CONTINUE_AT_ITERATION" > "$CKP_SAVED_PATH/latest.txt"

# Second command - Continue training from the checkpoint
echo "Continuing training from the checkpoint with dp=2 tp=2 pp=1" | tee -a $OUTPUT_FILE
USE_FAST=1 CUDA_DEVICE_MAX_CONNECTIONS=1 torchrun --nproc_per_node=4 $NANOTRON_DIR/run_train.py --config-file $CONTINUE_CONFIG_PATH 2>&1 | tee -a $OUTPUT_FILE

# Check if the previous command was successful
if [ $? -eq 0 ]; then
    echo "Training continuation successful." | tee -a $OUTPUT_FILE
else
    echo "Training continuation failed." | tee -a $OUTPUT_FILE
fi
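To confirm that the resumed run reproduces the original losses, the two log files can be compared on lm_loss per iteration rather than eyeballed. A minimal sketch, assuming the nanotron log-line format shown in the logs below:

```python
import re

# Matches nanotron INFO lines such as:
# "iteration: 999 / 1000 | ... | lm_loss: 9.7 | lr: 1e-05 | ..."
LOSS_RE = re.compile(r"iteration: (\d+) / \d+ \|.*?lm_loss: ([\d.]+)")

def losses_by_iteration(log_text):
    """Return a dict mapping iteration number -> lm_loss."""
    return {int(m.group(1)): float(m.group(2))
            for m in LOSS_RE.finditer(log_text)}

# Toy example with two single-line "logs"; in practice, read the two
# training_output_zero_stage_*.log files and compare overlapping iterations.
run_a = "iteration: 999 / 1000 | consumed_tokens: 639K | lm_loss: 9.7 | lr: 1e-05"
run_b = "iteration: 999 / 1000 | consumed_tokens: 639K | lm_loss: 9.7 | lr: 1e-05"
assert losses_by_iteration(run_a) == losses_by_iteration(run_b) == {999: 9.7}
```

Note that the logged losses are rounded to three significant figures, so agreement here shows the runs match at that precision, not bit-for-bit.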

Training logs

ZeRO-0: dp=2, tp=4, pp=1

01/17/2024 13:47:27 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 991 / 1000 | consumed_tokens: 634K | elapsed_time_per_iteration_ms: 45 | tokens_per_sec: 14.2K | tokens_per_sec_per_gpu: 1.78K | global_batch_size: 20 | lm_loss: 10.6 | lr: 1e-05 | model_tflops_per_gpu: 0.00869 | hardware_tflops_per_gpu: 0.00869 | grad_norm: 0.515 | cuda_memory_allocated: 75.2M | cuda_max_memory_reserved: 547M | hd_total_memory_tb: 312G | hd_used_memory_tb: 61.8G | hd_free_memory_tb: 250G
01/17/2024 13:47:27 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 992 / 1000 | consumed_tokens: 635K | elapsed_time_per_iteration_ms: 41.2 | tokens_per_sec: 15.5K | tokens_per_sec_per_gpu: 1.94K | global_batch_size: 20 | lm_loss: 10.6 | lr: 1e-05 | model_tflops_per_gpu: 0.0095 | hardware_tflops_per_gpu: 0.00951 | grad_norm: 0.507
01/17/2024 13:47:27 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 993 / 1000 | consumed_tokens: 636K | elapsed_time_per_iteration_ms: 39.4 | tokens_per_sec: 16.2K | tokens_per_sec_per_gpu: 2.03K | global_batch_size: 20 | lm_loss: 10.6 | lr: 1e-05 | model_tflops_per_gpu: 0.00992 | hardware_tflops_per_gpu: 0.00992 | grad_norm: 0.514
01/17/2024 13:47:27 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 994 / 1000 | consumed_tokens: 636K | elapsed_time_per_iteration_ms: 43.2 | tokens_per_sec: 14.8K | tokens_per_sec_per_gpu: 1.85K | global_batch_size: 20 | lm_loss: 10.6 | lr: 1e-05 | model_tflops_per_gpu: 0.00905 | hardware_tflops_per_gpu: 0.00906 | grad_norm: 0.529
01/17/2024 13:47:27 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 995 / 1000 | consumed_tokens: 637K | elapsed_time_per_iteration_ms: 39.8 | tokens_per_sec: 16.1K | tokens_per_sec_per_gpu: 2.01K | global_batch_size: 20 | lm_loss: 10.6 | lr: 1e-05 | model_tflops_per_gpu: 0.00982 | hardware_tflops_per_gpu: 0.00983 | grad_norm: 0.559
01/17/2024 13:47:27 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 996 / 1000 | consumed_tokens: 637K | elapsed_time_per_iteration_ms: 38.7 | tokens_per_sec: 16.6K | tokens_per_sec_per_gpu: 2.07K | global_batch_size: 20 | lm_loss: 10.6 | lr: 1e-05 | model_tflops_per_gpu: 0.0101 | hardware_tflops_per_gpu: 0.0101 | grad_norm: 0.549
01/17/2024 13:47:27 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 997 / 1000 | consumed_tokens: 638K | elapsed_time_per_iteration_ms: 47.1 | tokens_per_sec: 13.6K | tokens_per_sec_per_gpu: 1.7K | global_batch_size: 20 | lm_loss: 10.6 | lr: 1e-05 | model_tflops_per_gpu: 0.0083 | hardware_tflops_per_gpu: 0.0083 | grad_norm: 0.531
01/17/2024 13:47:27 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 998 / 1000 | consumed_tokens: 639K | elapsed_time_per_iteration_ms: 44.3 | tokens_per_sec: 14.5K | tokens_per_sec_per_gpu: 1.81K | global_batch_size: 20 | lm_loss: 10.6 | lr: 1e-05 | model_tflops_per_gpu: 0.00883 | hardware_tflops_per_gpu: 0.00884 | grad_norm: 0.558
01/17/2024 13:47:27 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 999 / 1000 | consumed_tokens: 639K | elapsed_time_per_iteration_ms: 40.6 | tokens_per_sec: 15.8K | tokens_per_sec_per_gpu: 1.97K | global_batch_size: 20 | lm_loss: 10.6 | lr: 1e-05 | model_tflops_per_gpu: 0.00963 | hardware_tflops_per_gpu: 0.00963 | grad_norm: 0.551
01/17/2024 13:47:27 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 1000 / 1000 | consumed_tokens: 640K | elapsed_time_per_iteration_ms: 38.7 | tokens_per_sec: 16.5K | tokens_per_sec_per_gpu: 2.07K | global_batch_size: 20 | lm_loss: 10.6 | lr: 1e-05 | model_tflops_per_gpu: 0.0101 | hardware_tflops_per_gpu: 0.0101 | grad_norm: 0.556

ZeRO-0: dp=2, tp=2, pp=1

01/17/2024 13:48:41 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 991 / 1000 | consumed_tokens: 634K | elapsed_time_per_iteration_ms: 35.7 | tokens_per_sec: 17.9K | tokens_per_sec_per_gpu: 4.48K | global_batch_size: 20 | lm_loss: 10.6 | lr: 1e-05 | model_tflops_per_gpu: 0.0219 | hardware_tflops_per_gpu: 0.0219 | grad_norm: 0.516 | cuda_memory_allocated: 83.3M | cuda_max_memory_reserved: 545M | hd_total_memory_tb: 312G | hd_used_memory_tb: 61.8G | hd_free_memory_tb: 250G
01/17/2024 13:48:41 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 992 / 1000 | consumed_tokens: 635K | elapsed_time_per_iteration_ms: 34.8 | tokens_per_sec: 18.4K | tokens_per_sec_per_gpu: 4.59K | global_batch_size: 20 | lm_loss: 10.6 | lr: 1e-05 | model_tflops_per_gpu: 0.0225 | hardware_tflops_per_gpu: 0.0225 | grad_norm: 0.508
01/17/2024 13:48:41 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 993 / 1000 | consumed_tokens: 636K | elapsed_time_per_iteration_ms: 34.7 | tokens_per_sec: 18.5K | tokens_per_sec_per_gpu: 4.62K | global_batch_size: 20 | lm_loss: 10.6 | lr: 1e-05 | model_tflops_per_gpu: 0.0226 | hardware_tflops_per_gpu: 0.0226 | grad_norm: 0.514
01/17/2024 13:48:41 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 994 / 1000 | consumed_tokens: 636K | elapsed_time_per_iteration_ms: 42.4 | tokens_per_sec: 15.1K | tokens_per_sec_per_gpu: 3.77K | global_batch_size: 20 | lm_loss: 10.6 | lr: 1e-05 | model_tflops_per_gpu: 0.0184 | hardware_tflops_per_gpu: 0.0185 | grad_norm: 0.529
01/17/2024 13:48:41 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 995 / 1000 | consumed_tokens: 637K | elapsed_time_per_iteration_ms: 34.7 | tokens_per_sec: 18.5K | tokens_per_sec_per_gpu: 4.61K | global_batch_size: 20 | lm_loss: 10.6 | lr: 1e-05 | model_tflops_per_gpu: 0.0226 | hardware_tflops_per_gpu: 0.0226 | grad_norm: 0.56
01/17/2024 13:48:41 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 996 / 1000 | consumed_tokens: 637K | elapsed_time_per_iteration_ms: 35 | tokens_per_sec: 18.3K | tokens_per_sec_per_gpu: 4.57K | global_batch_size: 20 | lm_loss: 10.6 | lr: 1e-05 | model_tflops_per_gpu: 0.0223 | hardware_tflops_per_gpu: 0.0224 | grad_norm: 0.55
01/17/2024 13:48:41 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 997 / 1000 | consumed_tokens: 638K | elapsed_time_per_iteration_ms: 34.8 | tokens_per_sec: 18.4K | tokens_per_sec_per_gpu: 4.6K | global_batch_size: 20 | lm_loss: 10.6 | lr: 1e-05 | model_tflops_per_gpu: 0.0225 | hardware_tflops_per_gpu: 0.0225 | grad_norm: 0.531
01/17/2024 13:48:41 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 998 / 1000 | consumed_tokens: 639K | elapsed_time_per_iteration_ms: 34.7 | tokens_per_sec: 18.4K | tokens_per_sec_per_gpu: 4.61K | global_batch_size: 20 | lm_loss: 10.6 | lr: 1e-05 | model_tflops_per_gpu: 0.0225 | hardware_tflops_per_gpu: 0.0225 | grad_norm: 0.559
01/17/2024 13:48:41 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 999 / 1000 | consumed_tokens: 639K | elapsed_time_per_iteration_ms: 34.6 | tokens_per_sec: 18.5K | tokens_per_sec_per_gpu: 4.62K | global_batch_size: 20 | lm_loss: 10.6 | lr: 1e-05 | model_tflops_per_gpu: 0.0226 | hardware_tflops_per_gpu: 0.0226 | grad_norm: 0.552
01/17/2024 13:48:41 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 1000 / 1000 | consumed_tokens: 640K | elapsed_time_per_iteration_ms: 34.4 | tokens_per_sec: 18.6K | tokens_per_sec_per_gpu: 4.65K | global_batch_size: 20 | lm_loss: 10.6 | lr: 1e-05 | model_tflops_per_gpu: 0.0227 | hardware_tflops_per_gpu: 0.0227 | grad_norm: 0.557

ZeRO-1: dp=2, tp=4, pp=1

01/17/2024 14:02:16 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 991 / 1000 | consumed_tokens: 634K | elapsed_time_per_iteration_ms: 272 | tokens_per_sec: 2.35K | tokens_per_sec_per_gpu: 294 | global_batch_size: 20 | lm_loss: 9.63 | lr: 1e-05 | model_tflops_per_gpu: 0.00744 | hardware_tflops_per_gpu: 0.0075 | grad_norm: 1.11 | cuda_memory_allocated: 90.9M | cuda_max_memory_reserved: 570M | hd_total_memory_tb: 312G | hd_used_memory_tb: 61.8G | hd_free_memory_tb: 250G
01/17/2024 14:02:16 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 992 / 1000 | consumed_tokens: 635K | elapsed_time_per_iteration_ms: 268 | tokens_per_sec: 2.39K | tokens_per_sec_per_gpu: 298 | global_batch_size: 20 | lm_loss: 9.69 | lr: 1e-05 | model_tflops_per_gpu: 0.00755 | hardware_tflops_per_gpu: 0.00761 | grad_norm: 1.1
01/17/2024 14:02:16 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 993 / 1000 | consumed_tokens: 636K | elapsed_time_per_iteration_ms: 269 | tokens_per_sec: 2.38K | tokens_per_sec_per_gpu: 298 | global_batch_size: 20 | lm_loss: 9.67 | lr: 1e-05 | model_tflops_per_gpu: 0.00754 | hardware_tflops_per_gpu: 0.00759 | grad_norm: 1.07
01/17/2024 14:02:16 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 994 / 1000 | consumed_tokens: 636K | elapsed_time_per_iteration_ms: 277 | tokens_per_sec: 2.31K | tokens_per_sec_per_gpu: 289 | global_batch_size: 20 | lm_loss: 9.66 | lr: 1e-05 | model_tflops_per_gpu: 0.00731 | hardware_tflops_per_gpu: 0.00737 | grad_norm: 1.1
01/17/2024 14:02:17 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 995 / 1000 | consumed_tokens: 637K | elapsed_time_per_iteration_ms: 275 | tokens_per_sec: 2.32K | tokens_per_sec_per_gpu: 291 | global_batch_size: 20 | lm_loss: 9.69 | lr: 1e-05 | model_tflops_per_gpu: 0.00735 | hardware_tflops_per_gpu: 0.00741 | grad_norm: 1.08
01/17/2024 14:02:17 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 996 / 1000 | consumed_tokens: 637K | elapsed_time_per_iteration_ms: 269 | tokens_per_sec: 2.38K | tokens_per_sec_per_gpu: 298 | global_batch_size: 20 | lm_loss: 9.62 | lr: 1e-05 | model_tflops_per_gpu: 0.00754 | hardware_tflops_per_gpu: 0.00759 | grad_norm: 1.23
01/17/2024 14:02:17 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 997 / 1000 | consumed_tokens: 638K | elapsed_time_per_iteration_ms: 269 | tokens_per_sec: 2.38K | tokens_per_sec_per_gpu: 298 | global_batch_size: 20 | lm_loss: 9.68 | lr: 1e-05 | model_tflops_per_gpu: 0.00754 | hardware_tflops_per_gpu: 0.00759 | grad_norm: 1.25
01/17/2024 14:02:17 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 998 / 1000 | consumed_tokens: 639K | elapsed_time_per_iteration_ms: 276 | tokens_per_sec: 2.31K | tokens_per_sec_per_gpu: 289 | global_batch_size: 20 | lm_loss: 9.61 | lr: 1e-05 | model_tflops_per_gpu: 0.00732 | hardware_tflops_per_gpu: 0.00738 | grad_norm: 1.18
01/17/2024 14:02:18 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 999 / 1000 | consumed_tokens: 639K | elapsed_time_per_iteration_ms: 274 | tokens_per_sec: 2.33K | tokens_per_sec_per_gpu: 292 | global_batch_size: 20 | lm_loss: 9.7 | lr: 1e-05 | model_tflops_per_gpu: 0.00739 | hardware_tflops_per_gpu: 0.00744 | grad_norm: 1.22
01/17/2024 14:02:18 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 1000 / 1000 | consumed_tokens: 640K | elapsed_time_per_iteration_ms: 269 | tokens_per_sec: 2.38K | tokens_per_sec_per_gpu: 297 | global_batch_size: 20 | lm_loss: 9.61 | lr: 1e-05 | model_tflops_per_gpu: 0.00752 | hardware_tflops_per_gpu: 0.00757 | grad_norm: 1.14

ZeRO-1: dp=2, tp=2, pp=1

Saving weights: 100%|██████████| 123/123 [00:00<00:00, 565.02it/s]
01/17/2024 14:07:46 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 991 / 1000 | consumed_tokens: 634K | elapsed_time_per_iteration_ms: 258 | tokens_per_sec: 2.48K | tokens_per_sec_per_gpu: 621 | global_batch_size: 20 | lm_loss: 9.63 | lr: 1e-05 | model_tflops_per_gpu: 0.0157 | hardware_tflops_per_gpu: 0.0158 | grad_norm: 1.11 | cuda_memory_allocated: 114M | cuda_max_memory_reserved: 619M | hd_total_memory_tb: 312G | hd_used_memory_tb: 61.8G | hd_free_memory_tb: 250G
01/17/2024 14:07:46 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 992 / 1000 | consumed_tokens: 635K | elapsed_time_per_iteration_ms: 259 | tokens_per_sec: 2.47K | tokens_per_sec_per_gpu: 618 | global_batch_size: 20 | lm_loss: 9.69 | lr: 1e-05 | model_tflops_per_gpu: 0.0157 | hardware_tflops_per_gpu: 0.0158 | grad_norm: 1.1
01/17/2024 14:07:46 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 993 / 1000 | consumed_tokens: 636K | elapsed_time_per_iteration_ms: 261 | tokens_per_sec: 2.46K | tokens_per_sec_per_gpu: 614 | global_batch_size: 20 | lm_loss: 9.67 | lr: 1e-05 | model_tflops_per_gpu: 0.0155 | hardware_tflops_per_gpu: 0.0157 | grad_norm: 1.07
01/17/2024 14:07:46 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 994 / 1000 | consumed_tokens: 636K | elapsed_time_per_iteration_ms: 254 | tokens_per_sec: 2.52K | tokens_per_sec_per_gpu: 630 | global_batch_size: 20 | lm_loss: 9.66 | lr: 1e-05 | model_tflops_per_gpu: 0.016 | hardware_tflops_per_gpu: 0.0161 | grad_norm: 1.1
01/17/2024 14:07:47 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 995 / 1000 | consumed_tokens: 637K | elapsed_time_per_iteration_ms: 258 | tokens_per_sec: 2.49K | tokens_per_sec_per_gpu: 621 | global_batch_size: 20 | lm_loss: 9.7 | lr: 1e-05 | model_tflops_per_gpu: 0.0157 | hardware_tflops_per_gpu: 0.0158 | grad_norm: 1.08
01/17/2024 14:07:47 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 996 / 1000 | consumed_tokens: 637K | elapsed_time_per_iteration_ms: 263 | tokens_per_sec: 2.43K | tokens_per_sec_per_gpu: 608 | global_batch_size: 20 | lm_loss: 9.63 | lr: 1e-05 | model_tflops_per_gpu: 0.0154 | hardware_tflops_per_gpu: 0.0155 | grad_norm: 1.34
01/17/2024 14:07:47 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 997 / 1000 | consumed_tokens: 638K | elapsed_time_per_iteration_ms: 258 | tokens_per_sec: 2.48K | tokens_per_sec_per_gpu: 621 | global_batch_size: 20 | lm_loss: 9.68 | lr: 1e-05 | model_tflops_per_gpu: 0.0157 | hardware_tflops_per_gpu: 0.0158 | grad_norm: 1.35
01/17/2024 14:07:47 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 998 / 1000 | consumed_tokens: 639K | elapsed_time_per_iteration_ms: 254 | tokens_per_sec: 2.52K | tokens_per_sec_per_gpu: 630 | global_batch_size: 20 | lm_loss: 9.61 | lr: 1e-05 | model_tflops_per_gpu: 0.0159 | hardware_tflops_per_gpu: 0.0161 | grad_norm: 1.18
01/17/2024 14:07:48 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 999 / 1000 | consumed_tokens: 639K | elapsed_time_per_iteration_ms: 259 | tokens_per_sec: 2.47K | tokens_per_sec_per_gpu: 618 | global_batch_size: 20 | lm_loss: 9.7 | lr: 1e-05 | model_tflops_per_gpu: 0.0156 | hardware_tflops_per_gpu: 0.0158 | grad_norm: 1.22
01/17/2024 14:07:48 [INFO|DP=0|PP=0|TP=0|ip-26-0-160-225]: iteration: 1000 / 1000 | consumed_tokens: 640K | elapsed_time_per_iteration_ms: 268 | tokens_per_sec: 2.39K | tokens_per_sec_per_gpu: 598 | global_batch_size: 20 | lm_loss: 9.62 | lr: 1e-05 | model_tflops_per_gpu: 0.0151 | hardware_tflops_per_gpu: 0.0152 | grad_norm: 1.14
01/17/2024 14:07:48 [WARNING|DP=0|PP=0|TP=0|ip-26-0-160-225]: Saving checkpoint a

Log files

training_output_zero_stage_0.log

training_output_zero_stage_1.log

3outeille commented 10 months ago

lgtm !