NVIDIA / TransformerEngine

A library for accelerating Transformer models on NVIDIA GPUs, including support for 8-bit floating point (FP8) precision on Hopper and Ada GPUs, providing better performance with lower memory utilization in both training and inference.
https://docs.nvidia.com/deeplearning/transformer-engine/user-guide/index.html
Apache License 2.0

[PyTorch] Adjust checkpointing of FP8 metadata for attention #917

Closed: cyanguwa closed this pull request 2 weeks ago

cyanguwa commented 3 weeks ago

Description

This PR relocates the FP8 metadata for attention from FusedAttention to DotProductAttention: it makes DotProductAttention a TransformerEngineBaseModule and FusedAttention a plain torch.nn.Module. Going forward, core_attention._extra_state will be the centralized location for FP8 metadata for any attention backend, replacing core_attention.fused_attention._extra_state, which served only FusedAttention (introduced in #768).
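
To make the relocation concrete, here is a minimal sketch of the class restructuring. The class bodies are hypothetical stand-ins, not TE's actual code (the real modules live in transformer_engine.pytorch and carry far more logic); only the `get_extra_state` / `set_extra_state` hooks are standard torch.nn.Module machinery, which is what routes FP8 metadata into a module's `_extra_state` checkpoint entry:

```python
import torch

class TransformerEngineBaseModule(torch.nn.Module):
    """Stand-in for the TE base class that checkpoints FP8 metadata."""
    def __init__(self):
        super().__init__()
        self.fp8_meta = {}  # hypothetical container for FP8 scales / amax history

    def get_extra_state(self):
        # state_dict() stores this under "<module path>._extra_state".
        return self.fp8_meta

    def set_extra_state(self, state):
        self.fp8_meta = state


class FusedAttention(torch.nn.Module):
    """After this PR: a plain torch.nn.Module with no FP8 metadata of its own."""
    def forward(self, q, k, v):
        return torch.nn.functional.scaled_dot_product_attention(q, k, v)


class DotProductAttention(TransformerEngineBaseModule):
    """After this PR: owns the FP8 metadata for whichever backend runs."""
    def __init__(self):
        super().__init__()
        self.fused_attention = FusedAttention()

    def forward(self, q, k, v):
        return self.fused_attention(q, k, v)


# The checkpoint key moves accordingly:
model = DotProductAttention()
keys = model.state_dict().keys()
assert "_extra_state" in keys                      # new, backend-agnostic location
assert "fused_attention._extra_state" not in keys  # old, FusedAttention-only location
```

Under this layout, any backend selected inside DotProductAttention shares the same `_extra_state` entry, so the checkpoint format no longer depends on which attention backend produced it.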


cyanguwa commented 3 weeks ago

/te-ci pytorch

cyanguwa commented 3 weeks ago

/te-ci pytorch

cyanguwa commented 2 weeks ago

/te-ci pytorch