jzhang38 / TinyLlama

The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens.

fp16 finetune results in loss=0 #153

Open sankexin opened 5 months ago

sankexin commented 5 months ago

TinyLlama-1.1B-intermediate-step-240k-503b:

train metrics
  epoch                    =        5.0
  train_loss               =     0.0063
  train_runtime            = 2:05:47.06
  train_samples_per_second =      6.184
  train_steps_per_second   =      0.387
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 128/128 [00:25<00:00, 5.08it/s]
eval metrics
  epoch                    =        5.0
  eval_loss                =        nan
  eval_runtime             = 0:00:25.70
  eval_samples_per_second  =     19.921
  eval_steps_per_second    =       4.98
wandb: Waiting for W&B process to finish... (success).
wandb:
wandb: Run history:
wandb:                     eval/runtime ▂▁▁█▄▂
wandb:          eval/samples_per_second ▇██▁▅▇
wandb:            eval/steps_per_second ▇██▁▅▇
wandb:                      train/epoch ▁▁▁▁▂▂▂▂▂▃▃▃▃▃▃▄▄▄▄▄▅▅▅▅▅▅▆▆▆▆▆▇▇▇▇▇▇███
wandb:                train/global_step ▁▁▁▁▂▂▂▂▂▃▃▃▃▃▄▄▄▄▄▄▅▅▅▅▅▅▆▆▆▆▆▇▇▇▇▇▇███
wandb:              train/learning_rate ▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁
wandb:                       train/loss ▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁
wandb:                 train/total_flos ▁
wandb:                 train/train_loss ▁
wandb:              train/train_runtime ▁
wandb:   train/train_samples_per_second ▁
wandb:     train/train_steps_per_second ▁
wandb:
wandb: Run summary:
wandb:                        eval/loss nan
wandb:                     eval/runtime 25.7021
wandb:          eval/samples_per_second 19.921
wandb:            eval/steps_per_second 4.98
wandb:                      train/epoch 5.0
wandb:                train/global_step 2920
wandb:              train/learning_rate 1e-05
wandb:                       train/loss 0.0
wandb:                 train/total_flos 1.4150772030072422e+17
wandb:                 train/train_loss 0.00628
wandb:              train/train_runtime 7547.0665
wandb:   train/train_samples_per_second 6.184
wandb:     train/train_steps_per_second 0.387
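The run ends with train/loss 0.0 and eval/loss nan. Below is a minimal diagnostic sketch (not the training script used here; the Hub repo id is an assumption inferred from the checkpoint name above) that loads the same checkpoint in fp32, bf16, and fp16 and compares the forward-pass loss on one sample. If only the fp16 loss comes out nan/inf, the half-precision forward pass itself is overflowing, which would be consistent with the degenerate training loss reported above.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hub repo id based on the checkpoint named in this issue; adjust
# if the local intermediate checkpoint path differs.
ckpt = "TinyLlama/TinyLlama-1.1B-intermediate-step-240k-503b"
device = "cuda" if torch.cuda.is_available() else "cpu"

tok = AutoTokenizer.from_pretrained(ckpt)
batch = tok("The quick brown fox jumps over the lazy dog.", return_tensors="pt").to(device)
labels = batch["input_ids"].clone()

for dtype in (torch.float32, torch.bfloat16, torch.float16):
    model = AutoModelForCausalLM.from_pretrained(ckpt, torch_dtype=dtype).to(device)
    model.eval()
    with torch.no_grad():
        out = model(**batch, labels=labels)
    # A finite loss in fp32/bf16 but nan/inf in fp16 points to an fp16
    # overflow in the forward pass rather than a data or labeling problem.
    print(dtype, out.loss.item())
```

This only probes the forward pass; gradient scaling and optimizer state in the actual fine-tuning run can fail in additional ways, but a nan fp16 forward loss alone is enough to produce the loss=0 / eval nan pattern shown in the log.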