A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper and Ada GPUs, to provide better performance with lower memory utilization in both training and inference.
Type of change
[ ] Documentation change (change only to the documentation, either a fix or new content)
[ ] Bug fix (non-breaking change which fixes an issue)
[x] New feature (non-breaking change which adds functionality)
[ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
Description

This PR updates cudnn-frontend to 1.8.0 and is intended to be cherry-picked to the 1.12 release. #1244 has already made this update on main for a different feature.
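As a quick sanity check of the bump (not part of this PR), a small compile-time probe against the vendored headers could look like the sketch below. It assumes cudnn-frontend's main header is cudnn_frontend.h and that it exposes CUDNN_FRONTEND_MAJOR_VERSION, CUDNN_FRONTEND_MINOR_VERSION, and CUDNN_FRONTEND_PATCH_VERSION macros; those names, and building against the vendored copy, are assumptions rather than anything specified in this PR.

```cpp
// Hypothetical sanity check, not part of this PR: verify at compile time that the
// cudnn-frontend headers being built against are at least 1.8.0.
// CUDNN_FRONTEND_*_VERSION macro names are assumed here, not taken from this PR.
#include <cstdio>
#include <cudnn_frontend.h>

static_assert(CUDNN_FRONTEND_MAJOR_VERSION > 1 ||
                  (CUDNN_FRONTEND_MAJOR_VERSION == 1 && CUDNN_FRONTEND_MINOR_VERSION >= 8),
              "cudnn-frontend 1.8.0 or newer is required");

int main() {
    // Print the version the build actually picked up, to confirm the bump took effect.
    std::printf("cudnn-frontend version: %d.%d.%d\n",
                CUDNN_FRONTEND_MAJOR_VERSION,
                CUDNN_FRONTEND_MINOR_VERSION,
                CUDNN_FRONTEND_PATCH_VERSION);
    return 0;
}
```

Compiled against the updated headers (cudnn_frontend.h also needs a cuDNN installation), the static_assert should fail on anything older than 1.8.0.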
Changes
Please list the changes introduced in this PR:

- Update cudnn-frontend to 1.8.0.
Checklist: