This PR adds basic support for scaled optimizer state, as discussed in the MS-AMP paper. The idea is that per-tensor scaling factors combined with FP16/FP8 optimizer state result in lower memory usage than FP32 optimizer state, with no degradation in convergence. This implementation is not quite the same as the MS-AMP FP8 optimizer: it only uses FP16 optimizer state, and it uses per-parameter-fragment scaling factors rather than per-parameter scaling factors. It is a preliminary implementation, and its performance could be improved with custom kernels (e.g., a kernel to compute scaling factors, or a fused kernel combining the FP16-FP32 casts with the Adam step).
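The scaled-storage idea can be sketched roughly as follows. This is a minimal NumPy illustration, not the actual implementation: the function names and the amax-based choice of scale are assumptions, and the real code operates on parameter fragments inside the distributed optimizer.

```python
import numpy as np

FP16_MAX = 65504.0  # largest finite float16 value

def quantize_fp16_scaled(frag_fp32):
    """Store an FP32 optimizer-state fragment as FP16 plus one FP32 scale.

    The scale maps the fragment's max magnitude to the FP16 range, so
    small entries in the fragment keep relative precision after the cast.
    (Hypothetical helper; the PR's actual scaling kernel may differ.)
    """
    amax = np.abs(frag_fp32).max()
    scale = FP16_MAX / amax if amax > 0 else 1.0
    frag_fp16 = (frag_fp32 * scale).astype(np.float16)
    return frag_fp16, np.float32(scale)

def dequantize_fp16_scaled(frag_fp16, scale):
    """Recover an approximate FP32 fragment from scaled FP16 storage."""
    return frag_fp16.astype(np.float32) / scale
```

In this sketch the optimizer would dequantize a fragment to FP32, apply the Adam update, and re-quantize; a fused kernel could fold the two casts into the Adam step itself.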
In the process of debugging, I've also made some other performance optimizations and bugfixes:
- Generalize support for overlapping the first grad sync with the optimizer step (to be used for NeMo FP8 support)
- Fix a bug where loading a checkpoint does not load parameter group configs