Description
Restores the include ordering fixed in #560, which was inadvertently reverted: defining nv_bfloat16 before including a header that declares it triggers a re-declaration error at compile time.
Fixes # (issue)
Type of change
[ ] Documentation change (change only to the documentation, either a fix or new content)
[x] Bug fix (non-breaking change which fixes an issue)
[ ] New feature (non-breaking change which adds functionality)
[ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
[ ] Infra/Build change
[ ] Code refactor
Changes
This was already addressed in #560, but the change was somehow reverted. Including a header that transitively includes the one declaring nv_bfloat16, after nv_bfloat16 has already been defined, triggers a re-declaration error.
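To illustrate the failure mode, here is a hypothetical minimal sketch (the file names, the struct body, and the `local_bf16` alias are illustrative assumptions, not the actual TransformerEngine sources): cuda_bf16.h declares nv_bfloat16, so any translation unit that defines that name first and then pulls the header in, directly or transitively, fails to compile.

```cpp
// broken.cu -- hypothetical sketch of the re-declaration error (assumed names).
struct nv_bfloat16 {    // a local definition of the name comes first
    unsigned short bits;
};

// cuda_bf16.h also declares nv_bfloat16, so including it afterwards
// (directly, or transitively through another header) fails to compile:
#include <cuda_bf16.h>  // error: conflicting declaration of nv_bfloat16
```

Reordering so the declaring header comes first, as this PR does, avoids the clash:

```cpp
// fixed.cu -- include the declaring header first, then alias it locally.
#include <cuda_bf16.h>

using local_bf16 = nv_bfloat16;  // reuse the CUDA-provided type rather than redefining it
```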
Checklist: