carbonscott / exp-maxie


See if we can overfit a small subset. #2

Open carbonscott opened 2 hours ago

carbonscott commented 2 hours ago

NaN loss in float16 and a loss around 1e5 in float32; let's try to see if we can overfit a small subset.

carbonscott commented 2 hours ago

Check /gpfs/alpine2/proj-shared/lrn044/foundation_models/results/cwang31/exp-maxie/experiments/logs/633m-ds120+norm_pix_loss+float32.2024_0920_0017_17/rank0.log:

...
rank=0 | logevent=LOSS:TRAIN | iteration=69 | segment=0-120 | learning_rate=0.0002999999975957777 | grad_norm=0.087725 | mean_train_loss=0.453145 | tokens_per_sec=1.7e+05 | mfu=0.439 | grad_nosync_counter=2
rank=0 | logevent=LOSS:TRAIN | iteration=70 | segment=0-120 | learning_rate=0.00029999999751067905 | grad_norm=0.041372 | mean_train_loss=0.454524 | tokens_per_sec=1.7e+05 | mfu=0.429 | grad_nosync_counter=2
rank=0 | logevent=LOSS:TRAIN | iteration=71 | segment=0-120 | learning_rate=0.0002999999974241004 | grad_norm=0.016367 | mean_train_loss=0.456401 | tokens_per_sec=1.7e+05 | mfu=0.438 | grad_nosync_counter=2
rank=0 | logevent=LOSS:TRAIN | iteration=72 | segment=0-120 | learning_rate=0.0002999999973360418 | grad_norm=0.022696 | mean_train_loss=0.456690 | tokens_per_sec=1.7e+05 | mfu=0.438 | grad_nosync_counter=2
rank=0 | logevent=LOSS:TRAIN | iteration=73 | segment=0-120 | learning_rate=0.00029999999724650316 | grad_norm=0.021479 | mean_train_loss=0.456386 | tokens_per_sec=1.7e+05 | mfu=0.431 | grad_nosync_counter=2
rank=0 | logevent=LOSS:TRAIN | iteration=74 | segment=0-120 | learning_rate=0.0002999999971554846 | grad_norm=0.021746 | mean_train_loss=0.455900 | tokens_per_sec=1.7e+05 | mfu=0.424 | grad_nosync_counter=2
rank=0 | logevent=LOSS:TRAIN | iteration=75 | segment=0-120 | learning_rate=0.00029999999706298606 | grad_norm=0.018131 | mean_train_loss=0.454985 | tokens_per_sec=1.7e+05 | mfu=0.438 | grad_nosync_counter=2
rank=0 | logevent=LOSS:TRAIN | iteration=76 | segment=0-120 | learning_rate=0.0002999999969690075 | grad_norm=0.020247 | mean_train_loss=0.453553 | tokens_per_sec=1.6e+05 | mfu=0.414 | grad_nosync_counter=2
rank=0 | logevent=LOSS:TRAIN | iteration=77 | segment=0-120 | learning_rate=0.00029999999687354905 | grad_norm=0.074332 | mean_train_loss=0.455680 | tokens_per_sec=1.7e+05 | mfu=0.438 | grad_nosync_counter=2
rank=0 | logevent=LOSS:TRAIN | iteration=78 | segment=0-120 | learning_rate=0.0002999999967766106 | grad_norm=0.047885 | mean_train_loss=0.452476 | tokens_per_sec=1.7e+05 | mfu=0.437 | grad_nosync_counter=2
rank=0 | logevent=LOSS:TRAIN | iteration=79 | segment=0-120 | learning_rate=0.0002999999966781921 | grad_norm=0.029300 | mean_train_loss=0.452271 | tokens_per_sec=1.7e+05 | mfu=0.437 | grad_nosync_counter=2
rank=0 | logevent=LOSS:TRAIN | iteration=80 | segment=0-120 | learning_rate=0.00029999999657829364 | grad_norm=0.034628 | mean_train_loss=0.452264 | tokens_per_sec=1.7e+05 | mfu=0.421 | grad_nosync_counter=2
rank=0 | logevent=LOSS:TRAIN | iteration=81 | segment=0-120 | learning_rate=0.00029999999647691526 | grad_norm=0.042110 | mean_train_loss=0.452345 | tokens_per_sec=1.7e+05 | mfu=0.437 | grad_nosync_counter=2
rank=0 | logevent=LOSS:TRAIN | iteration=82 | segment=0-120 | learning_rate=0.0002999999963740569 | grad_norm=0.034648 | mean_train_loss=0.451235 | tokens_per_sec=1.7e+05 | mfu=0.438 | grad_nosync_counter=2
rank=0 | logevent=LOSS:TRAIN | iteration=83 | segment=0-120 | learning_rate=0.00029999999626971854 | grad_norm=0.040431 | mean_train_loss=0.451725 | tokens_per_sec=1.7e+05 | mfu=0.439 | grad_nosync_counter=2
rank=0 | logevent=LOSS:TRAIN | iteration=84 | segment=0-120 | learning_rate=0.0002999999961639002 | grad_norm=0.039443 | mean_train_loss=0.450854 | tokens_per_sec=1.7e+05 | mfu=0.438 | grad_nosync_counter=2
rank=0 | logevent=LOSS:TRAIN | iteration=85 | segment=0-120 | learning_rate=0.00029999999605660185 | grad_norm=0.029570 | mean_train_loss=0.450786 | tokens_per_sec=1.7e+05 | mfu=0.438 | grad_nosync_counter=2
rank=0 | logevent=LOSS:TRAIN | iteration=86 | segment=0-120 | learning_rate=0.0002999999959478236 | grad_norm=0.045936 | mean_train_loss=0.450742 | tokens_per_sec=1.7e+05 | mfu=0.430 | grad_nosync_counter=2
rank=0 | logevent=LOSS:TRAIN | iteration=87 | segment=0-120 | learning_rate=0.0002999999958375653 | grad_norm=0.024533 | mean_train_loss=0.450054 | tokens_per_sec=1.6e+05 | mfu=0.415 | grad_nosync_counter=2
rank=0 | logevent=LOSS:TRAIN | iteration=88 | segment=0-120 | learning_rate=0.00029999999572582705 | grad_norm=0.038046 | mean_train_loss=0.449761 | tokens_per_sec=1.7e+05 | mfu=0.438 | grad_nosync_counter=2
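To eyeball the loss curve from these runs, the `|`-separated `key=value` lines above can be parsed with a small helper. This is a hypothetical snippet, not part of exp-maxie; it only assumes the log format shown in the excerpt.

```python
# Hypothetical helper: extract (iteration, mean_train_loss) pairs from
# LOSS:TRAIN lines in a rank0.log, based on the "|"-separated key=value
# format seen in the excerpt above.
def parse_train_loss(lines):
    records = []
    for line in lines:
        if "logevent=LOSS:TRAIN" not in line:
            continue
        fields = dict(
            kv.split("=", 1)
            for kv in (part.strip() for part in line.split("|"))
            if "=" in kv
        )
        records.append((int(fields["iteration"]),
                        float(fields["mean_train_loss"])))
    return records

sample = [
    "rank=0 | logevent=LOSS:TRAIN | iteration=69 | segment=0-120 | "
    "learning_rate=0.0003 | grad_norm=0.087725 | mean_train_loss=0.453145 | "
    "tokens_per_sec=1.7e+05 | mfu=0.439 | grad_nosync_counter=2",
]
print(parse_train_loss(sample))  # [(69, 0.453145)]
```

Feeding the parsed pairs into any plotting tool makes it easy to check whether the loss keeps decreasing or plateaus around 0.45 as in the excerpt.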

Let's see if we can overfit it.
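The idea behind the check is that a healthy training setup should be able to drive the loss on a small fixed batch essentially to zero; if it can't, the bug is in the model, loss, or optimizer rather than the data scale. A toy sketch of that sanity check, using a one-parameter linear model in pure Python rather than the actual MAXIE model (the real experiment would train on a fixed slice of the dataset instead):

```python
# Overfit-a-tiny-batch sanity check, illustrated on a toy problem:
# repeatedly take gradient steps on the SAME small batch and confirm
# the loss collapses toward zero.
def overfit_tiny_batch(xs, ys, lr=0.01, steps=500):
    w = 0.0  # single scalar weight for the model y_hat = w * x
    for _ in range(steps):
        # gradient of the mean squared error w.r.t. w
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    loss = sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
    return w, loss

xs, ys = [1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0]  # tiny batch, y = 2x
w, loss = overfit_tiny_batch(xs, ys)
print(round(w, 3), loss < 1e-6)  # weight converges to ~2.0, loss near zero
```

If the equivalent experiment with the 633M model stalls (loss flat at ~0.45 as in the log above) or diverges to NaN, that points at the training loop or numerics rather than dataset size.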