With the default causal mask (i.e. samples in a packed sequence attend to other samples):
[Rank 0]: iteration 10/ 5000 | consumed samples: 1920 | elapsed time per iteration (ms): 32054.4 | learning rate: 5.600E-05 | global batch size: 192 | lm loss: 1.075286E+01 | loss scale: 16384.0 | grad norm: 55.755 | number of skipped iterations: 3 | number of nan iterations: 0 | TFLOPs: 97.26 |
[Rank 0] (after 10 iterations) memory (MB) | allocated: 21713.9619140625 | max allocated: 24435.7236328125 | reserved: 25182.0 | max reserved: 25182.0
[Rank 0]: time (ms) | forward-compute: 11368.85 | backward-compute: 20598.18 | backward-params-all-reduce: 2.70 | backward-layernorm-all-reduce: 0.01 | backward-embedding-all-reduce: 0.02 | backward-reduce-model-grads: 2.76 | backward-gather-model-params: 0.01 | optimizer-copy-to-main-grad: 10.81 | optimizer-unscale-and-check-inf: 14.68 | optimizer-clip-main-grad: 8.47 | optimizer-count-zeros: 0.00 | optimizer-inner-step: 19.36 | optimizer-copy-main-to-model-params: 7.02 | optimizer: 60.43 | batch-generator: 222.70
[Rank 0]: iteration 20/ 5000 | consumed samples: 3840 | elapsed time per iteration (ms): 31846.3 | learning rate: 1.360E-04 | global batch size: 192 | lm loss: 8.225761E+00 | loss scale: 16384.0 | grad norm: 3.159 | number of skipped iterations: 0 | number of nan iterations: 0 | TFLOPs: 97.90 |
[Rank 0]: time (ms) | forward-compute: 11244.83 | backward-compute: 20511.30 | backward-params-all-reduce: 2.70 | backward-layernorm-all-reduce: 0.01 | backward-embedding-all-reduce: 0.02 | backward-reduce-model-grads: 2.76 | backward-gather-model-params: 0.01 | optimizer-copy-to-main-grad: 10.77 | optimizer-unscale-and-check-inf: 6.97 | optimizer-clip-main-grad: 11.01 | optimizer-count-zeros: 0.01 | optimizer-inner-step: 22.72 | optimizer-copy-main-to-model-params: 10.02 | optimizer: 61.56 | batch-generator: 200.59
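To make the mask difference concrete, here is a minimal PyTorch sketch (illustrative only, not this PR's implementation; the helper names are mine, and sample boundaries are assumed to be known, e.g. from EOD token positions). The default mask is plain lower-triangular, so later samples in a packed sequence can attend to earlier ones; the correct mask is block-diagonal causal per sample:

```python
import torch

def default_causal_mask(seq_len):
    # Plain lower-triangular mask: token i attends to all tokens <= i,
    # including tokens that belong to a different packed sample.
    return torch.tril(torch.ones(seq_len, seq_len)).bool()

def per_sample_causal_mask(sample_lengths):
    # Block-diagonal causal mask: token i attends only to earlier
    # tokens of its own sample.
    seq_len = sum(sample_lengths)
    mask = torch.zeros(seq_len, seq_len, dtype=torch.bool)
    start = 0
    for length in sample_lengths:
        end = start + length
        mask[start:end, start:end] = torch.tril(torch.ones(length, length)).bool()
        start = end
    return mask

# Two packed samples of lengths 3 and 2: with the per-sample mask,
# positions 3-4 can no longer attend to positions 0-2.
print(default_causal_mask(5).int())
print(per_sample_causal_mask([3, 2]).int())
```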
With the correct mask via --no-masked-softmax-fusion (major drop in TFLOPs, but better loss):
[Rank 0]: iteration 10/ 5000 | consumed samples: 1920 | elapsed time per iteration (ms): 57834.3 | learning rate: 5.600E-05 | global batch size: 192 | lm loss: 1.075210E+01 | loss scale: 16384.0 | grad norm: 55.922 | number of skipped iterations: 3 | number of nan iterations: 0 | TFLOPs: 53.91 |
[Rank 0] (after 10 iterations) memory (MB) | allocated: 21715.9619140625 | max allocated: 25364.8935546875 | reserved: 26590.0 | max reserved: 26590.0
[Rank 0]: time (ms) | forward-compute: 18768.40 | backward-compute: 38982.99 | backward-params-all-reduce: 2.71 | backward-layernorm-all-reduce: 0.01 | backward-embedding-all-reduce: 0.02 | backward-reduce-model-grads: 2.77 | backward-gather-model-params: 0.00 | optimizer-copy-to-main-grad: 10.83 | optimizer-unscale-and-check-inf: 11.11 | optimizer-clip-main-grad: 7.83 | optimizer-count-zeros: 0.00 | optimizer-inner-step: 18.74 | optimizer-copy-main-to-model-params: 7.01 | optimizer: 55.59 | batch-generator: 222.53
[Rank 0]: iteration 20/ 5000 | consumed samples: 3840 | elapsed time per iteration (ms): 57644.9 | learning rate: 1.360E-04 | global batch size: 192 | lm loss: 8.204919E+00 | loss scale: 16384.0 | grad norm: 2.929 | number of skipped iterations: 0 | number of nan iterations: 0 | TFLOPs: 54.09 |
[Rank 0]: time (ms) | forward-compute: 18671.47 | backward-compute: 38885.51 | backward-params-all-reduce: 2.71 | backward-layernorm-all-reduce: 0.01 | backward-embedding-all-reduce: 0.02 | backward-reduce-model-grads: 2.77 | backward-gather-model-params: 0.01 | optimizer-copy-to-main-grad: 10.84 | optimizer-unscale-and-check-inf: 6.97 | optimizer-clip-main-grad: 11.06 | optimizer-count-zeros: 0.01 | optimizer-inner-step: 22.70 | optimizer-copy-main-to-model-params: 10.01 | optimizer: 61.65 | batch-generator: 203.01
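As I understand it, --no-masked-softmax-fusion makes Megatron skip its fused scale-mask-softmax CUDA kernels and fall back to a plain PyTorch path, which accepts the custom mask but launches several separate kernels, and that is where the TFLOPs drop comes from. The unfused path boils down to roughly the following (a simplified sketch, not Megatron's actual code):

```python
import torch

def unfused_masked_softmax(scores, mask, scale=1.0):
    # scores: attention logits, e.g. [batch, heads, seq, seq]
    # mask:   boolean tensor broadcastable to scores, True = may attend
    scores = scores * scale
    scores = scores.masked_fill(~mask, torch.finfo(scores.dtype).min)
    return torch.softmax(scores, dim=-1)
```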
With the correct mask via this PR (i.e. still fused; loss matches the unfused run above, with only a tiny drop in TFLOPs):
[Rank 0]: iteration 10/ 5000 | consumed samples: 1920 | elapsed time per iteration (ms): 32858.4 | learning rate: 5.600E-05 | global batch size: 192 | lm loss: 1.075210E+01 | loss scale: 16384.0 | grad norm: 55.922 | number of skipped iterations: 3 | number of nan iterations: 0 | TFLOPs: 94.88 |
[Rank 0] (after 10 iterations) memory (MB) | allocated: 21713.9619140625 | max allocated: 24435.7236328125 | reserved: 25182.0 | max reserved: 25182.0
[Rank 0]: time (ms) | forward-compute: 11700.80 | backward-compute: 21072.64 | backward-params-all-reduce: 2.70 | backward-layernorm-all-reduce: 0.01 | backward-embedding-all-reduce: 0.02 | backward-reduce-model-grads: 2.76 | backward-gather-model-params: 0.00 | optimizer-copy-to-main-grad: 10.82 | optimizer-unscale-and-check-inf: 12.83 | optimizer-clip-main-grad: 7.81 | optimizer-count-zeros: 0.00 | optimizer-inner-step: 19.37 | optimizer-copy-main-to-model-params: 7.03 | optimizer: 57.92 | batch-generator: 233.84
[Rank 0]: iteration 20/ 5000 | consumed samples: 3840 | elapsed time per iteration (ms): 32656.7 | learning rate: 1.360E-04 | global batch size: 192 | lm loss: 8.204919E+00 | loss scale: 16384.0 | grad norm: 2.925 | number of skipped iterations: 0 | number of nan iterations: 0 | TFLOPs: 95.47 |
[Rank 0]: time (ms) | forward-compute: 11591.05 | backward-compute: 20977.19 | backward-params-all-reduce: 2.70 | backward-layernorm-all-reduce: 0.01 | backward-embedding-all-reduce: 0.02 | backward-reduce-model-grads: 2.76 | backward-gather-model-params: 0.01 | optimizer-copy-to-main-grad: 10.79 | optimizer-unscale-and-check-inf: 6.97 | optimizer-clip-main-grad: 11.07 | optimizer-count-zeros: 0.01 | optimizer-inner-step: 22.70 | optimizer-copy-main-to-model-params: 10.02 | optimizer: 61.62 | batch-generator: 195.23
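Summarizing the three runs at iteration 20 (numbers taken from the logs above):

Configuration                   lm loss        TFLOPs
default causal mask (fused)     8.225761E+00   97.90
--no-masked-softmax-fusion      8.204919E+00   54.09
this PR (fused, correct mask)   8.204919E+00   95.47

I.e. this PR matches the unfused run's loss while keeping nearly all of the fused kernel's throughput.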