NVIDIA / TransformerEngine

A library for accelerating Transformer models on NVIDIA GPUs, including support for 8-bit floating point (FP8) precision on Hopper and Ada GPUs, providing better performance with lower memory utilization in both training and inference.
https://docs.nvidia.com/deeplearning/transformer-engine/user-guide/index.html
Apache License 2.0
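For readers new to the library, below is a minimal sketch of the FP8 path mentioned above, using `te.fp8_autocast` with a delayed-scaling recipe. The layer sizes and recipe settings are illustrative only, and FP8 execution requires Hopper- or Ada-class hardware; this is context, not code from this PR.

```python
import torch
import transformer_engine.pytorch as te
from transformer_engine.common.recipe import DelayedScaling, Format

# Illustrative recipe: E4M3 forward / E5M2 backward with delayed amax scaling.
fp8_recipe = DelayedScaling(
    fp8_format=Format.HYBRID,
    amax_history_len=16,
    amax_compute_algo="max",
)

layer = te.Linear(1024, 1024, bias=True).cuda()
inp = torch.randn(128, 1024, device="cuda")

# FP8 is active only inside the autocast context; outside it the layer
# runs in its regular precision.
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    out = layer(inp)
out.sum().backward()
```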

[PyTorch] Add support for cuDNN FusedAttention + THD + CP #885

Closed. xrennvidia closed this 3 weeks ago.

xrennvidia commented 1 month ago

Description

Add support for cuDNN FusedAttention (FA) with the THD (packed variable-sequence-length) QKV format under context parallelism (CP).
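For readers unfamiliar with the abbreviations, here is a minimal sketch of the path this PR targets: `te.DotProductAttention` driven with the THD (packed variable-length) QKV format, which is the layout that context parallelism is being extended to cover. Tensor sizes and sequence lengths are made up for illustration, exact keyword names can differ between TE versions, and this is not code from the PR itself.

```python
import torch
import transformer_engine.pytorch as te

num_heads, head_dim = 16, 64  # illustrative sizes

attn = te.DotProductAttention(
    num_heads,                        # number of attention heads
    head_dim,                         # kv_channels: per-head hidden size
    attn_mask_type="padding_causal",
    qkv_format="thd",                 # packed tokens: [total_tokens, heads, head_dim]
)

# Two packed sequences of lengths 5 and 3 -> 8 tokens total.
cu_seqlens = torch.tensor([0, 5, 8], dtype=torch.int32, device="cuda")
q = torch.randn(8, num_heads, head_dim, dtype=torch.bfloat16, device="cuda")
k, v = torch.randn_like(q), torch.randn_like(q)

out = attn(
    q, k, v,
    cu_seqlens_q=cu_seqlens,
    cu_seqlens_kv=cu_seqlens,
    max_seqlen_q=5,
    max_seqlen_kv=5,
)

# With context parallelism, the CP process group is additionally registered
# on the module before the forward pass, e.g. (cp_group, cp_global_ranks and
# cp_stream come from the distributed setup, not shown here):
# attn.set_context_parallel_group(cp_group, cp_global_ranks, cp_stream)
```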

Type of change

Changes

Please list the changes introduced in this PR:

Checklist:

cyanguwa commented 1 month ago

/te-ci pytorch

cyanguwa commented 4 weeks ago

Thanks for submitting the PR. Could you use our template to fill in the PR description please?

# Description

Please include a brief summary of the changes, relevant motivation and context.

Fixes # (issue)

## Type of change

- [ ] Documentation change (change only to the documentation, either a fix or new content)
- [ ] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)

## Changes

Please list the changes introduced in this PR:

- Change A
- Change B

# Checklist:

- [ ] I have read and followed the [contributing guidelines](https://github.com/NVIDIA/TransformerEngine/blob/main/CONTRIBUTING.rst)
- [ ] The functionality is complete
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] I have made corresponding changes to the documentation
- [ ] My changes generate no new warnings
- [ ] I have added tests that prove my fix is effective or that my feature works
- [ ] New and existing unit tests pass locally with my changes

cyanguwa commented 4 weeks ago

/te-ci pytorch

cyanguwa commented 3 weeks ago

/te-ci pytorch